Dataset columns, with the value type and the length range or number of distinct classes reported for each:

| Column | Type |
|---|---|
| cve_id | string (length 13–16) |
| obtain_all_privilege | string (3 classes) |
| obtain_user_privilege | string (2 classes) |
| obtain_other_privilege | string (2 classes) |
| user_interaction_required | string (3 classes) |
| cvss2_vector_string | string (106 classes) |
| cvss2_access_vector | string (4 classes) |
| cvss2_access_complexity | string (4 classes) |
| cvss2_authentication | string (3 classes) |
| cvss2_confidentiality_impact | string (4 classes) |
| cvss2_integrity_impact | string (4 classes) |
| cvss2_availability_impact | string (4 classes) |
| cvss2_base_score | string (50 classes) |
| cvss3_vector_string | string (226 classes) |
| cvss3_attack_vector | string (5 classes) |
| cvss3_attack_complexity | string (3 classes) |
| cvss3_privileges_required | string (4 classes) |
| cvss3_user_interaction | string (3 classes) |
| cvss3_scope | string (3 classes) |
| cvss3_confidentiality_impact | string (4 classes) |
| cvss3_integrity_impact | string (4 classes) |
| cvss3_availability_impact | string (4 classes) |
| cvss3_base_score | string (55 classes) |
| cvss3_base_severity | string (5 classes) |
| exploitability_score | string (22 classes) |
| impact_score | string (15 classes) |
| ac_insuf_info | string (3 classes) |
| reference_json | string (length 221–23.3k) |
| problemtype_json | string (200 classes) |
| severity | string (4 classes) |
| cve_nodes | string (length 2–33.1k) |
| cve_description | string (length 64–1.99k) |
| cve_last_modified_date | string (length 17) |
| cve_published_date | string (length 17) |
| cwe_name | string (125 classes) |
| cwe_description | string (124 classes) |
| cwe_extended_description | string (95 classes) |
| cwe_url | string (124 classes) |
| cwe_is_category | int64 (0–1) |
| commit_author | string (length 0–34) |
| commit_author_date | string (length 25) |
| commit_msg | string (length 0–13.3k) |
| commit_hash | string (length 40) |
| commit_is_merge | string (1 class) |
| repo_name | string (467 classes) |
| repo_description | string (459 classes) |
| repo_date_created | string (467 classes) |
| repo_date_last_push | string (467 classes) |
| repo_homepage | string (294 classes) |
| repo_owner | string (470 classes) |
| repo_stars | string (406 classes) |
| repo_forks | string (352 classes) |
| function_name | string (length 3–120) |
| function_signature | string (length 6–640) |
| function_parameters | string (length 2–302) |
| function | string (length 12–114k) |
| function_token_count | string (length 1–5) |
| function_before_change | string (1 class) |
| labels | int64 (all 1) |

The sample rows below follow this column order; each record is pipe-separated, and the multi-line `function` cell is reproduced verbatim.
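For orientation, the columns above can be inspected with the Hugging Face `datasets` library. A minimal sketch, assuming the dataset is published under the hypothetical identifier `user/cvefixes-functions` with a `train` split (substitute the real path and split name):

```python
# Minimal sketch: load the dataset and print a few of the columns listed above.
# "user/cvefixes-functions" is a hypothetical identifier: replace it with the real dataset path.
from datasets import load_dataset

ds = load_dataset("user/cvefixes-functions", split="train")

for row in ds.select(range(3)):
    print(row["cve_id"], row["cwe_name"], row["cvss3_base_severity"])
    print(row["repo_name"], row["commit_hash"][:12], row["commit_msg"][:60])
    print(row["function_name"], "|", row["function_token_count"], "tokens | label:", row["labels"])
```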
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-11 16:16:33+03:00 | Fix read_varint overflow | d708ed548e1d6f254ba81a21de8ba543a53b5598 | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __Pyx_PyInt_As_unsigned_char | __Pyx_PyInt_As_unsigned_char( PyObject * x) | ['x'] | static CYTHON_INLINE unsigned char __Pyx_PyInt_As_unsigned_char(PyObject *x) {
const unsigned char neg_one = (unsigned char) ((unsigned char) 0 - (unsigned char) 1), const_zero = (unsigned char) 0;
const int is_unsigned = neg_one > const_zero;
#if PY_MAJOR_VERSION < 3
if (likely(PyInt_Check(x))) {
if (sizeof(unsigned char) < sizeof(long)) {
__PYX_VERIFY_RETURN_INT(unsigned char, long, PyInt_AS_LONG(x))
} else {
long val = PyInt_AS_LONG(x);
if (is_unsigned && unlikely(val < 0)) {
goto raise_neg_overflow;
}
return (unsigned char) val;
}
} else
#endif
if (likely(PyLong_Check(x))) {
if (is_unsigned) {
#if CYTHON_USE_PYLONG_INTERNALS
const digit* digits = ((PyLongObject*)x)->ob_digit;
switch (Py_SIZE(x)) {
case 0: return (unsigned char) 0;
case 1: __PYX_VERIFY_RETURN_INT(unsigned char, digit, digits[0])
case 2:
if (8 * sizeof(unsigned char) > 1 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) >= 2 * PyLong_SHIFT) {
return (unsigned char) (((((unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0]));
}
}
break;
case 3:
if (8 * sizeof(unsigned char) > 2 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) >= 3 * PyLong_SHIFT) {
return (unsigned char) (((((((unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0]));
}
}
break;
case 4:
if (8 * sizeof(unsigned char) > 3 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) >= 4 * PyLong_SHIFT) {
return (unsigned char) (((((((((unsigned char)digits[3]) << PyLong_SHIFT) | (unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0]));
}
}
break;
}
#endif
#if CYTHON_COMPILING_IN_CPYTHON
if (unlikely(Py_SIZE(x) < 0)) {
goto raise_neg_overflow;
}
#else
{
int result = PyObject_RichCompareBool(x, Py_False, Py_LT);
if (unlikely(result < 0))
return (unsigned char) -1;
if (unlikely(result == 1))
goto raise_neg_overflow;
}
#endif
if (sizeof(unsigned char) <= sizeof(unsigned long)) {
__PYX_VERIFY_RETURN_INT_EXC(unsigned char, unsigned long, PyLong_AsUnsignedLong(x))
#ifdef HAVE_LONG_LONG
} else if (sizeof(unsigned char) <= sizeof(unsigned PY_LONG_LONG)) {
__PYX_VERIFY_RETURN_INT_EXC(unsigned char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x))
#endif
}
} else {
#if CYTHON_USE_PYLONG_INTERNALS
const digit* digits = ((PyLongObject*)x)->ob_digit;
switch (Py_SIZE(x)) {
case 0: return (unsigned char) 0;
case -1: __PYX_VERIFY_RETURN_INT(unsigned char, sdigit, (sdigit) (-(sdigit)digits[0]))
case 1: __PYX_VERIFY_RETURN_INT(unsigned char, digit, +digits[0])
case -2:
if (8 * sizeof(unsigned char) - 1 > 1 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 2 * PyLong_SHIFT) {
return (unsigned char) (((unsigned char)-1)*(((((unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
case 2:
if (8 * sizeof(unsigned char) > 1 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 2 * PyLong_SHIFT) {
return (unsigned char) ((((((unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
case -3:
if (8 * sizeof(unsigned char) - 1 > 2 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 3 * PyLong_SHIFT) {
return (unsigned char) (((unsigned char)-1)*(((((((unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
case 3:
if (8 * sizeof(unsigned char) > 2 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 3 * PyLong_SHIFT) {
return (unsigned char) ((((((((unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
case -4:
if (8 * sizeof(unsigned char) - 1 > 3 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 4 * PyLong_SHIFT) {
return (unsigned char) (((unsigned char)-1)*(((((((((unsigned char)digits[3]) << PyLong_SHIFT) | (unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
case 4:
if (8 * sizeof(unsigned char) > 3 * PyLong_SHIFT) {
if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) {
__PYX_VERIFY_RETURN_INT(unsigned char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])))
} else if (8 * sizeof(unsigned char) - 1 > 4 * PyLong_SHIFT) {
return (unsigned char) ((((((((((unsigned char)digits[3]) << PyLong_SHIFT) | (unsigned char)digits[2]) << PyLong_SHIFT) | (unsigned char)digits[1]) << PyLong_SHIFT) | (unsigned char)digits[0])));
}
}
break;
}
#endif
if (sizeof(unsigned char) <= sizeof(long)) {
__PYX_VERIFY_RETURN_INT_EXC(unsigned char, long, PyLong_AsLong(x))
#ifdef HAVE_LONG_LONG
} else if (sizeof(unsigned char) <= sizeof(PY_LONG_LONG)) {
__PYX_VERIFY_RETURN_INT_EXC(unsigned char, PY_LONG_LONG, PyLong_AsLongLong(x))
#endif
}
}
{
#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray)
PyErr_SetString(PyExc_RuntimeError,
"_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers");
#else
unsigned char val;
PyObject *v = __Pyx_PyNumber_IntOrLong(x);
#if PY_MAJOR_VERSION < 3
if (likely(v) && !PyLong_Check(v)) {
PyObject *tmp = v;
v = PyNumber_Long(tmp);
Py_DECREF(tmp);
}
#endif
if (likely(v)) {
int one = 1; int is_little = (int)*(unsigned char *)&one;
unsigned char *bytes = (unsigned char *)&val;
int ret = _PyLong_AsByteArray((PyLongObject *)v,
bytes, sizeof(val),
is_little, !is_unsigned);
Py_DECREF(v);
if (likely(!ret))
return val;
}
#endif
return (unsigned char) -1;
}
} else {
unsigned char val;
PyObject *tmp = __Pyx_PyNumber_IntOrLong(x);
if (!tmp) return (unsigned char) -1;
val = __Pyx_PyInt_As_unsigned_char(tmp);
Py_DECREF(tmp);
return val;
}
raise_overflow:
PyErr_SetString(PyExc_OverflowError,
"value too large to convert to unsigned char");
return (unsigned char) -1;
raise_neg_overflow:
PyErr_SetString(PyExc_OverflowError,
"can't convert negative value to unsigned char");
return (unsigned char) -1;
} | 2145 | True | 1 |
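The record above pairs CWE-120 with the commit "Fix read_varint overflow". The `clickhouse_driver/varint.pyx` loop quoted in the generated C comments ORs each 7-bit group into `result` at an ever-growing `shift`, and in the compiled extension both `result` and `shift` are fixed-width `Py_ssize_t` values, so a server that keeps sending continuation bytes (high bit set) pushes the shift past the integer width. A minimal Python sketch of that decode loop, plus an illustrative bounded variant (shown only to make the pattern concrete; it is not the project's actual patch):

```python
def read_varint_unbounded(read_one):
    """LEB128 decode as quoted in varint.pyx: nothing limits how far `shift` grows."""
    shift = 0
    result = 0
    while True:
        i = read_one()                 # one byte from the server, 0..255
        result |= (i & 0x7F) << shift
        shift += 7
        if i < 0x80:                   # high bit clear: last byte of the varint
            break
    return result


def read_varint_bounded(read_one, max_bytes=10):
    """Illustrative bounded variant: a 64-bit value needs at most 10 LEB128 bytes."""
    result = 0
    for shift in range(0, 7 * max_bytes, 7):
        i = read_one()
        result |= (i & 0x7F) << shift
        if i < 0x80:
            return result
    raise ValueError("varint too long: malformed or malicious input")
```

In pure Python the unbounded version merely produces a very large integer; the overflow described by CVE-2020-26759 arises in the Cython-compiled version, where the accumulator is a machine-width integer.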
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-11 16:16:33+03:00 | Fix read_varint overflow | d708ed548e1d6f254ba81a21de8ba543a53b5598 | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_6varint_2read_varint | __pyx_pf_17clickhouse_driver_6varint_2read_varint( CYTHON_UNUSED PyObject * __pyx_self , PyObject * __pyx_v_f) | ['__pyx_self', '__pyx_v_f'] | static PyObject *__pyx_pf_17clickhouse_driver_6varint_2read_varint(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_f) {
Py_ssize_t __pyx_v_shift;
Py_ssize_t __pyx_v_result;
unsigned char __pyx_v_i;
PyObject *__pyx_v_read_one = NULL;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
unsigned char __pyx_t_4;
int __pyx_t_5;
__Pyx_RefNannySetupContext("read_varint", 0);
/* "clickhouse_driver/varint.pyx":33
* Reads integer of variable length using LEB128.
* """
* cdef Py_ssize_t shift = 0 # <<<<<<<<<<<<<<
* cdef Py_ssize_t result = 0
* cdef unsigned char i
*/
__pyx_v_shift = 0;
/* "clickhouse_driver/varint.pyx":34
* """
* cdef Py_ssize_t shift = 0
* cdef Py_ssize_t result = 0 # <<<<<<<<<<<<<<
* cdef unsigned char i
*
*/
__pyx_v_result = 0;
/* "clickhouse_driver/varint.pyx":37
* cdef unsigned char i
*
* read_one = f.read_one # <<<<<<<<<<<<<<
*
* while True:
*/
__pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_f, __pyx_n_s_read_one); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 37, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_v_read_one = __pyx_t_1;
__pyx_t_1 = 0;
/* "clickhouse_driver/varint.pyx":39
* read_one = f.read_one
*
* while True: # <<<<<<<<<<<<<<
* i = read_one()
* result |= (i & 0x7f) << shift
*/
while (1) {
/* "clickhouse_driver/varint.pyx":40
*
* while True:
* i = read_one() # <<<<<<<<<<<<<<
* result |= (i & 0x7f) << shift
* shift += 7
*/
__Pyx_INCREF(__pyx_v_read_one);
__pyx_t_2 = __pyx_v_read_one; __pyx_t_3 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_3)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_3);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_3) : __Pyx_PyObject_CallNoArg(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_4 = __Pyx_PyInt_As_unsigned_char(__pyx_t_1); if (unlikely((__pyx_t_4 == (unsigned char)-1) && PyErr_Occurred())) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v_i = __pyx_t_4;
/* "clickhouse_driver/varint.pyx":41
* while True:
* i = read_one()
* result |= (i & 0x7f) << shift # <<<<<<<<<<<<<<
* shift += 7
* if i < 0x80:
*/
__pyx_v_result = (__pyx_v_result | ((__pyx_v_i & 0x7f) << __pyx_v_shift));
/* "clickhouse_driver/varint.pyx":42
* i = read_one()
* result |= (i & 0x7f) << shift
* shift += 7 # <<<<<<<<<<<<<<
* if i < 0x80:
* break
*/
__pyx_v_shift = (__pyx_v_shift + 7);
/* "clickhouse_driver/varint.pyx":43
* result |= (i & 0x7f) << shift
* shift += 7
* if i < 0x80: # <<<<<<<<<<<<<<
* break
*
*/
__pyx_t_5 = ((__pyx_v_i < 0x80) != 0);
if (__pyx_t_5) {
/* "clickhouse_driver/varint.pyx":44
* shift += 7
* if i < 0x80:
* break # <<<<<<<<<<<<<<
*
* return result
*/
goto __pyx_L4_break;
/* "clickhouse_driver/varint.pyx":43
* result |= (i & 0x7f) << shift
* shift += 7
* if i < 0x80: # <<<<<<<<<<<<<<
* break
*
*/
}
}
__pyx_L4_break:;
/* "clickhouse_driver/varint.pyx":46
* break
*
* return result # <<<<<<<<<<<<<<
*/
__Pyx_XDECREF(__pyx_r);
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_result); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 46, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* "clickhouse_driver/varint.pyx":29
*
*
* def read_varint(f): # <<<<<<<<<<<<<<
* """
* Reads integer of variable length using LEB128.
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_AddTraceback("clickhouse_driver.varint.read_varint", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_read_one);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 440 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-11 16:16:33+03:00 | Fix read_varint overflow | d708ed548e1d6f254ba81a21de8ba543a53b5598 | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_6varint_write_varint | __pyx_pf_17clickhouse_driver_6varint_write_varint( CYTHON_UNUSED PyObject * __pyx_self , Py_ssize_t __pyx_v_number , PyObject * __pyx_v_buf) | ['__pyx_self', '__pyx_v_number', '__pyx_v_buf'] | static PyObject *__pyx_pf_17clickhouse_driver_6varint_write_varint(CYTHON_UNUSED PyObject *__pyx_self, Py_ssize_t __pyx_v_number, PyObject *__pyx_v_buf) {
Py_ssize_t __pyx_v_i;
unsigned char __pyx_v_towrite;
unsigned char __pyx_v_num_buf[32];
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
__Pyx_RefNannySetupContext("write_varint", 0);
/* "clickhouse_driver/varint.pyx":8
* Writes integer of variable length using LEB128.
* """
* cdef Py_ssize_t i = 0 # <<<<<<<<<<<<<<
* cdef unsigned char towrite
* # Py_ssize_t checks integer on function call and
*/
__pyx_v_i = 0;
/* "clickhouse_driver/varint.pyx":15
* cdef unsigned char num_buf[32]
*
* while True: # <<<<<<<<<<<<<<
* towrite = number & 0x7f
* number >>= 7
*/
while (1) {
/* "clickhouse_driver/varint.pyx":16
*
* while True:
* towrite = number & 0x7f # <<<<<<<<<<<<<<
* number >>= 7
* if number:
*/
__pyx_v_towrite = (__pyx_v_number & 0x7f);
/* "clickhouse_driver/varint.pyx":17
* while True:
* towrite = number & 0x7f
* number >>= 7 # <<<<<<<<<<<<<<
* if number:
* num_buf[i] = towrite | 0x80
*/
__pyx_v_number = (__pyx_v_number >> 7);
/* "clickhouse_driver/varint.pyx":18
* towrite = number & 0x7f
* number >>= 7
* if number: # <<<<<<<<<<<<<<
* num_buf[i] = towrite | 0x80
* i += 1
*/
__pyx_t_1 = (__pyx_v_number != 0);
if (__pyx_t_1) {
/* "clickhouse_driver/varint.pyx":19
* number >>= 7
* if number:
* num_buf[i] = towrite | 0x80 # <<<<<<<<<<<<<<
* i += 1
* else:
*/
(__pyx_v_num_buf[__pyx_v_i]) = (__pyx_v_towrite | 0x80);
/* "clickhouse_driver/varint.pyx":20
* if number:
* num_buf[i] = towrite | 0x80
* i += 1 # <<<<<<<<<<<<<<
* else:
* num_buf[i] = towrite
*/
__pyx_v_i = (__pyx_v_i + 1);
/* "clickhouse_driver/varint.pyx":18
* towrite = number & 0x7f
* number >>= 7
* if number: # <<<<<<<<<<<<<<
* num_buf[i] = towrite | 0x80
* i += 1
*/
goto __pyx_L5;
}
/* "clickhouse_driver/varint.pyx":22
* i += 1
* else:
* num_buf[i] = towrite # <<<<<<<<<<<<<<
* i += 1
* break
*/
/*else*/ {
(__pyx_v_num_buf[__pyx_v_i]) = __pyx_v_towrite;
/* "clickhouse_driver/varint.pyx":23
* else:
* num_buf[i] = towrite
* i += 1 # <<<<<<<<<<<<<<
* break
*
*/
__pyx_v_i = (__pyx_v_i + 1);
/* "clickhouse_driver/varint.pyx":24
* num_buf[i] = towrite
* i += 1
* break # <<<<<<<<<<<<<<
*
* buf.write(PyBytes_FromStringAndSize(<char *>num_buf, i))
*/
goto __pyx_L4_break;
}
__pyx_L5:;
}
__pyx_L4_break:;
/* "clickhouse_driver/varint.pyx":26
* break
*
* buf.write(PyBytes_FromStringAndSize(<char *>num_buf, i)) # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_buf, __pyx_n_s_write); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 26, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyBytes_FromStringAndSize(((char *)__pyx_v_num_buf), __pyx_v_i); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 26, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_2 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 26, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/varint.pyx":4
*
*
* def write_varint(Py_ssize_t number, buf): # <<<<<<<<<<<<<<
* """
* Writes integer of variable length using LEB128.
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
__Pyx_AddTraceback("clickhouse_driver.varint.write_varint", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 434 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-11 16:16:33+03:00 | Fix read_varint overflow | d708ed548e1d6f254ba81a21de8ba543a53b5598 | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pw_17clickhouse_driver_6varint_1write_varint | __pyx_pw_17clickhouse_driver_6varint_1write_varint( PyObject * __pyx_self , PyObject * __pyx_args , PyObject * __pyx_kwds) | ['__pyx_self', '__pyx_args', '__pyx_kwds'] | static PyObject *__pyx_pw_17clickhouse_driver_6varint_1write_varint(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
Py_ssize_t __pyx_v_number;
PyObject *__pyx_v_buf = 0;
PyObject *__pyx_r = 0;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("write_varint (wrapper)", 0);
{
static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_number,&__pyx_n_s_buf,0};
PyObject* values[2] = {0,0};
if (unlikely(__pyx_kwds)) {
Py_ssize_t kw_args;
const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
switch (pos_args) {
case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
CYTHON_FALLTHROUGH;
case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
CYTHON_FALLTHROUGH;
case 0: break;
default: goto __pyx_L5_argtuple_error;
}
kw_args = PyDict_Size(__pyx_kwds);
switch (pos_args) {
case 0:
if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_number)) != 0)) kw_args--;
else goto __pyx_L5_argtuple_error;
CYTHON_FALLTHROUGH;
case 1:
if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_buf)) != 0)) kw_args--;
else {
__Pyx_RaiseArgtupleInvalid("write_varint", 1, 2, 2, 1); __PYX_ERR(0, 4, __pyx_L3_error)
}
}
if (unlikely(kw_args > 0)) {
if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "write_varint") < 0)) __PYX_ERR(0, 4, __pyx_L3_error)
}
} else if (PyTuple_GET_SIZE(__pyx_args) != 2) {
goto __pyx_L5_argtuple_error;
} else {
values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
}
__pyx_v_number = __Pyx_PyIndex_AsSsize_t(values[0]); if (unlikely((__pyx_v_number == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 4, __pyx_L3_error)
__pyx_v_buf = values[1];
}
goto __pyx_L4_argument_unpacking_done;
__pyx_L5_argtuple_error:;
__Pyx_RaiseArgtupleInvalid("write_varint", 1, 2, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 4, __pyx_L3_error)
__pyx_L3_error:;
__Pyx_AddTraceback("clickhouse_driver.varint.write_varint", __pyx_clineno, __pyx_lineno, __pyx_filename);
__Pyx_RefNannyFinishContext();
return NULL;
__pyx_L4_argument_unpacking_done:;
__pyx_r = __pyx_pf_17clickhouse_driver_6varint_write_varint(__pyx_self, __pyx_v_number, __pyx_v_buf);
/* function exit code */
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 438 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-11 16:16:33+03:00 | Fix read_varint overflow | d708ed548e1d6f254ba81a21de8ba543a53b5598 | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pymod_exec_varint | __pyx_pymod_exec_varint( PyObject * __pyx_pyinit_module) | ['__pyx_pyinit_module'] | static CYTHON_SMALL_CODE int __pyx_pymod_exec_varint(PyObject *__pyx_pyinit_module)
#endif
#endif
{
PyObject *__pyx_t_1 = NULL;
__Pyx_RefNannyDeclarations
#if CYTHON_PEP489_MULTI_PHASE_INIT
if (__pyx_m) {
if (__pyx_m == __pyx_pyinit_module) return 0;
PyErr_SetString(PyExc_RuntimeError, "Module 'varint' has already been imported. Re-initialisation is not supported.");
return -1;
}
#elif PY_MAJOR_VERSION >= 3
if (__pyx_m) return __Pyx_NewRef(__pyx_m);
#endif
#if CYTHON_REFNANNY
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
if (!__Pyx_RefNanny) {
PyErr_Clear();
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
if (!__Pyx_RefNanny)
Py_FatalError("failed to import 'refnanny' module");
}
#endif
__Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_varint(void)", 0);
if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pxy_PyFrame_Initialize_Offsets
__Pxy_PyFrame_Initialize_Offsets();
#endif
__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pyx_CyFunction_USED
if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_FusedFunction_USED
if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Coroutine_USED
if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Generator_USED
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_AsyncGen_USED
if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_StopAsyncIteration_USED
if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/*--- Library function declarations ---*/
/*--- Threads initialization code ---*/
#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
#ifdef WITH_THREAD /* Python build with threading support? */
PyEval_InitThreads();
#endif
#endif
/*--- Module creation code ---*/
#if CYTHON_PEP489_MULTI_PHASE_INIT
__pyx_m = __pyx_pyinit_module;
Py_INCREF(__pyx_m);
#else
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("varint", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_d);
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_b);
__pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_cython_runtime);
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
/*--- Initialize various global constants etc. ---*/
if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
if (__pyx_module_is_main_clickhouse_driver__varint) {
if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
}
#if PY_MAJOR_VERSION >= 3
{
PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
if (!PyDict_GetItemString(modules, "clickhouse_driver.varint")) {
if (unlikely(PyDict_SetItemString(modules, "clickhouse_driver.varint", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
}
}
#endif
/*--- Builtin init code ---*/
if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;
/*--- Constants init code ---*/
if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;
/*--- Global type/function init code ---*/
(void)__Pyx_modinit_global_init_code();
(void)__Pyx_modinit_variable_export_code();
(void)__Pyx_modinit_function_export_code();
(void)__Pyx_modinit_type_init_code();
if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;
(void)__Pyx_modinit_variable_import_code();
(void)__Pyx_modinit_function_import_code();
/*--- Execution code ---*/
#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/* "clickhouse_driver/varint.pyx":4
*
*
* def write_varint(Py_ssize_t number, buf): # <<<<<<<<<<<<<<
* """
* Writes integer of variable length using LEB128.
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_6varint_1write_varint, NULL, __pyx_n_s_clickhouse_driver_varint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_write_varint, __pyx_t_1) < 0) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/varint.pyx":29
*
*
* def read_varint(f): # <<<<<<<<<<<<<<
* """
* Reads integer of variable length using LEB128.
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_6varint_3read_varint, NULL, __pyx_n_s_clickhouse_driver_varint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 29, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_read_varint, __pyx_t_1) < 0) __PYX_ERR(0, 29, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/varint.pyx":1
* from cpython cimport Py_INCREF, PyBytes_FromStringAndSize # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/*--- Wrapped vars code ---*/
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
if (__pyx_m) {
if (__pyx_d) {
__Pyx_AddTraceback("init clickhouse_driver.varint", __pyx_clineno, __pyx_lineno, __pyx_filename);
}
Py_CLEAR(__pyx_m);
} else if (!PyErr_Occurred()) {
PyErr_SetString(PyExc_ImportError, "init clickhouse_driver.varint");
}
__pyx_L0:;
__Pyx_RefNannyFinishContext();
#if CYTHON_PEP489_MULTI_PHASE_INIT
return (__pyx_m != NULL) ? 0 : -1;
#elif PY_MAJOR_VERSION >= 3
return __pyx_m;
#else
return;
#endif
} | 928 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __Pyx_InitCachedConstants | __Pyx_InitCachedConstants( void) | ['void'] | static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) {
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0);
/* "clickhouse_driver/bufferedreader.pyx":191
*
* if self.current_buffer_size == 0:
* raise EOFError('Unexpected EOF while reading bytes') # <<<<<<<<<<<<<<
*
*
*/
__pyx_tuple_ = PyTuple_Pack(1, __pyx_kp_u_Unexpected_EOF_while_reading_byt); if (unlikely(!__pyx_tuple_)) __PYX_ERR(0, 191, __pyx_L1_error)
__Pyx_GOTREF(__pyx_tuple_);
__Pyx_GIVEREF(__pyx_tuple_);
/* "(tree fragment)":1
* def __pyx_unpickle_BufferedReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
__pyx_tuple__2 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_tuple__2);
__Pyx_GIVEREF(__pyx_tuple__2);
__pyx_codeobj__3 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__2, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_BufferedReader, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__3)) __PYX_ERR(1, 1, __pyx_L1_error)
__pyx_tuple__4 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_tuple__4);
__Pyx_GIVEREF(__pyx_tuple__4);
__pyx_codeobj__5 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__4, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_BufferedSocketRea, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__5)) __PYX_ERR(1, 1, __pyx_L1_error)
__pyx_tuple__6 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_tuple__6);
__Pyx_GIVEREF(__pyx_tuple__6);
__pyx_codeobj__7 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__6, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_CompressedBuffere, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__7)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_RefNannyFinishContext();
return 0;
__pyx_L1_error:;
__Pyx_RefNannyFinishContext();
return -1;
} | 367 | True | 1 |
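The remaining records come from the companion commit 3e990547 ("Fix malformed read/write in BufferedReader"; its message notes that read_strings and read were affected), which touches the same CWE-120 theme: a size taken from a server response is trusted when filling a client-side buffer. As a generic illustration of the defensive pattern for a length-prefixed read, using hypothetical `read_varint`/`read_exactly` helpers rather than the driver's actual API:

```python
MAX_STRING_SIZE = 1 << 20  # illustrative cap on a single server-supplied string

def read_prefixed_string(read_varint, read_exactly):
    """Read <varint length><bytes>, refusing sizes the client is not prepared to buffer.

    `read_varint` and `read_exactly` are hypothetical callables standing in for
    buffered-reader primitives; they are not the driver's real interface.
    """
    length = read_varint()
    if length < 0 or length > MAX_STRING_SIZE:
        raise ValueError(f"refusing to read string of claimed length {length}")
    data = read_exactly(length)  # must return exactly `length` bytes or raise
    return data.decode("utf-8", errors="replace")
```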
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __Pyx_decode_c_string | __Pyx_decode_c_string( const char * cstring , Py_ssize_t start , Py_ssize_t stop , const char * encoding , const char * errors , PyObject *(*decode_func)(const char*s,Py_ssize_t size,const char*errors)) | ['cstring', 'start', 'stop', 'encoding', 'errors'] | static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
const char* cstring, Py_ssize_t start, Py_ssize_t stop,
const char* encoding, const char* errors,
PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
Py_ssize_t length;
if (unlikely((start < 0) | (stop < 0))) {
size_t slen = strlen(cstring);
if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) {
PyErr_SetString(PyExc_OverflowError,
"c-string too long to convert to Python");
return NULL;
}
length = (Py_ssize_t) slen;
if (start < 0) {
start += length;
if (start < 0)
start = 0;
}
if (stop < 0)
stop += length;
}
if (unlikely(stop <= start))
return PyUnicode_FromUnicode(NULL, 0);
length = stop - start;
cstring += start;
if (decode_func) {
return decode_func(cstring, length, errors);
} else {
return PyUnicode_Decode(cstring, length, encoding, errors);
}
} | 197 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader__set_state | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
Py_ssize_t __pyx_t_2;
int __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
PyObject *__pyx_t_6 = NULL;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedReader__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[3])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (!(likely(PyByteArray_CheckExact(__pyx_t_1))||((__pyx_t_1) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytearray", Py_TYPE(__pyx_t_1)->tp_name), 0))) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->buffer);
__Pyx_DECREF(__pyx_v___pyx_result->buffer);
__pyx_v___pyx_result->buffer = ((PyObject*)__pyx_t_1);
__pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->current_buffer_size = __pyx_t_2;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->position = __pyx_t_2;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[3])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_2 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_4 = ((__pyx_t_2 > 3) != 0);
if (__pyx_t_4) {
} else {
__pyx_t_3 = __pyx_t_4;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = (__pyx_t_4 != 0);
__pyx_t_3 = __pyx_t_5;
__pyx_L4_bool_binop_done:;
if (__pyx_t_3) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[3]) # <<<<<<<<<<<<<<
*/
__pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_8 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
__pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7);
if (likely(__pyx_t_8)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
__Pyx_INCREF(__pyx_t_8);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_7, function);
}
}
__pyx_t_1 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);
__Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[3])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_6);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_BufferedReader__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 939 | True | 1 |
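The generated __pyx_unpickle_BufferedReader__set_state above follows the pattern spelled out in its "(tree fragment)" comments: reject a None state, unpack buffer, current_buffer_size and position from the tuple, type-check the bytearray, and only touch __dict__ when the tuple carries a fourth item. A condensed sketch of that validation pattern against the plain CPython C API (a simplified illustration with made-up names such as restore_state, not the Cython-generated code itself; it stores via attributes where the real extension type writes into C struct slots) could read:

#include <Python.h>

/* Hypothetical, simplified restore of state = (buffer, current_buffer_size, position[, dict]). */
static int restore_state(PyObject *self, PyObject *state)
{
    if (state == Py_None || !PyTuple_Check(state)) {
        PyErr_SetString(PyExc_TypeError, "state must be a tuple");
        return -1;
    }
    if (PyTuple_GET_SIZE(state) < 3) {
        PyErr_SetString(PyExc_ValueError, "state tuple too short");
        return -1;
    }

    PyObject *buffer = PyTuple_GET_ITEM(state, 0);               /* borrowed reference */
    if (buffer != Py_None && !PyByteArray_CheckExact(buffer)) {
        PyErr_Format(PyExc_TypeError, "Expected bytearray, got %.200s",
                     Py_TYPE(buffer)->tp_name);
        return -1;
    }

    Py_ssize_t size = PyNumber_AsSsize_t(PyTuple_GET_ITEM(state, 1), PyExc_OverflowError);
    if (size == -1 && PyErr_Occurred()) return -1;
    Py_ssize_t pos = PyNumber_AsSsize_t(PyTuple_GET_ITEM(state, 2), PyExc_OverflowError);
    if (pos == -1 && PyErr_Occurred()) return -1;

    /* The real extension type assigns these into C slots; a plain object would
     * keep them as attributes instead. */
    if (PyObject_SetAttrString(self, "buffer", buffer) < 0) return -1;
    (void)size; (void)pos;

    /* Only consult __dict__ when the tuple carries an extra mapping, mirroring
     * the len(state) > 3 guard in the comments above. */
    if (PyTuple_GET_SIZE(state) > 3 && PyObject_HasAttrString(self, "__dict__")) {
        PyObject *d = PyObject_GetAttrString(self, "__dict__");
        if (d == NULL) return -1;
        PyObject *r = PyObject_CallMethod(d, "update", "O", PyTuple_GET_ITEM(state, 3));
        Py_DECREF(d);
        if (r == NULL) return -1;
        Py_DECREF(r);
    }
    return 0;
}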
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedSocketReader__set_state | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedSocketReader__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedSocketReader__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
Py_ssize_t __pyx_t_2;
int __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
PyObject *__pyx_t_6 = NULL;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedSocketReader__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (!(likely(PyByteArray_CheckExact(__pyx_t_1))||((__pyx_t_1) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytearray", Py_TYPE(__pyx_t_1)->tp_name), 0))) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->__pyx_base.buffer);
__Pyx_DECREF(__pyx_v___pyx_result->__pyx_base.buffer);
__pyx_v___pyx_result->__pyx_base.buffer = ((PyObject*)__pyx_t_1);
__pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.current_buffer_size = __pyx_t_2;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.position = __pyx_t_2;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->sock);
__Pyx_DECREF(__pyx_v___pyx_result->sock);
__pyx_v___pyx_result->sock = __pyx_t_1;
__pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_2 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_4 = ((__pyx_t_2 > 4) != 0);
if (__pyx_t_4) {
} else {
__pyx_t_3 = __pyx_t_4;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = (__pyx_t_4 != 0);
__pyx_t_3 = __pyx_t_5;
__pyx_L4_bool_binop_done:;
if (__pyx_t_3) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4]) # <<<<<<<<<<<<<<
*/
__pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 4, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_8 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
__pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7);
if (likely(__pyx_t_8)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
__Pyx_INCREF(__pyx_t_8);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_7, function);
}
}
__pyx_t_1 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);
__Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_6);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_BufferedSocketReader__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1046 | True | 1 |
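The CWE-120 text carried on these rows describes the classic pattern: input of attacker-influenced length is copied into a fixed-size destination without first comparing the two sizes. A minimal, self-contained C illustration of the unchecked copy and a bounded alternative (generic example code, not drawn from clickhouse-driver) looks like this:

#include <stdio.h>
#include <string.h>

/* Unsafe: the length of `input` is never checked against sizeof(dst). */
static void copy_unchecked(const char *input)
{
    char dst[16];
    strcpy(dst, input);              /* writes past dst when input is 16 bytes or longer */
    printf("unchecked copy: %s\n", dst);
}

/* Bounded: the copy is limited to the destination size and always NUL-terminated. */
static void copy_bounded(const char *input)
{
    char dst[16];
    size_t n = strlen(input);
    if (n >= sizeof(dst))
        n = sizeof(dst) - 1;         /* truncate instead of overflowing dst */
    memcpy(dst, input, n);
    dst[n] = '\0';
    printf("bounded copy:   %s\n", dst);
}

int main(void)
{
    const char *short_input = "hello";
    copy_bounded(short_input);
    copy_unchecked(short_input);     /* only safe here because the input is short */
    return 0;
}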
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_CompressedBufferedReader__set_state | __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_CompressedBufferedReader__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedreader_CompressedBufferedReader * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_CompressedBufferedReader__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedreader_CompressedBufferedReader *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
Py_ssize_t __pyx_t_2;
int __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
PyObject *__pyx_t_6 = NULL;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_CompressedBufferedReader__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (!(likely(PyByteArray_CheckExact(__pyx_t_1))||((__pyx_t_1) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytearray", Py_TYPE(__pyx_t_1)->tp_name), 0))) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->__pyx_base.buffer);
__Pyx_DECREF(__pyx_v___pyx_result->__pyx_base.buffer);
__pyx_v___pyx_result->__pyx_base.buffer = ((PyObject*)__pyx_t_1);
__pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.current_buffer_size = __pyx_t_2;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_2 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.position = __pyx_t_2;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->read_block);
__Pyx_DECREF(__pyx_v___pyx_result->read_block);
__pyx_v___pyx_result->read_block = __pyx_t_1;
__pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_2 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_2 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_4 = ((__pyx_t_2 > 4) != 0);
if (__pyx_t_4) {
} else {
__pyx_t_3 = __pyx_t_4;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = (__pyx_t_4 != 0);
__pyx_t_3 = __pyx_t_5;
__pyx_L4_bool_binop_done:;
if (__pyx_t_3) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4]) # <<<<<<<<<<<<<<
*/
__pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 4, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_6);
__pyx_t_8 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) {
__pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7);
if (likely(__pyx_t_8)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7);
__Pyx_INCREF(__pyx_t_8);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_7, function);
}
}
__pyx_t_1 = (__pyx_t_8) ? __Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6);
__Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_6);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_CompressedBufferedReader__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1046 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_10__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_10__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_10__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
int __pyx_t_4;
int __pyx_t_5;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->current_buffer_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->position); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v_self->buffer);
__Pyx_GIVEREF(__pyx_v_self->buffer);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_self->buffer);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_3);
__pyx_t_3 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_3 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_v__dict = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_4 = (__pyx_v__dict != Py_None);
__pyx_t_5 = (__pyx_t_4 != 0);
if (__pyx_t_5) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v__dict);
__pyx_t_2 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_2));
__pyx_t_2 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = self.buffer is not None
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = self.buffer is not None # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, None), state
*/
/*else*/ {
__pyx_t_5 = (__pyx_v_self->buffer != ((PyObject*)Py_None));
__pyx_v_use_setstate = __pyx_t_5;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, None), state
* else:
*/
__pyx_t_5 = (__pyx_v_use_setstate != 0);
if (__pyx_t_5) {
/* "(tree fragment)":13
* use_setstate = self.buffer is not None
* if use_setstate:
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_BufferedReader); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_44607813);
__Pyx_GIVEREF(__pyx_int_44607813);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_44607813);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_3, 2, Py_None);
__pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, None), state
* else:
* return __pyx_unpickle_BufferedReader, (type(self), 0x2a8a945, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_BufferedReader__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_pyx_unpickle_BufferedReader); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_44607813);
__Pyx_GIVEREF(__pyx_int_44607813);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_44607813);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_state);
__pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3);
__pyx_t_1 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 997 | True | 1 |
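In the __reduce_cython__ body above, the "(tree fragment)" comments carry the literal 0x2a8a945 while the generated C pushes the cached integer __pyx_int_44607813; both spell the same checksum constant, since 0x2a8a945 equals 44607813 in decimal. Which of the two return shapes is produced depends on use_setstate: with a populated __dict__ or a non-None buffer, the (type(self), 0x2a8a945, None) form plus a separate state tuple is returned, otherwise the state rides inside the argument tuple. A trivial standalone check of the constant (plain C, unrelated to the driver itself):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* The hex literal from the comments and the decimal in __pyx_int_44607813 agree. */
    assert(0x2a8a945 == 44607813);
    printf("0x2a8a945 = %d\n", 0x2a8a945);
    return 0;
}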
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size_2__set__ | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size_2__set__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self , PyObject * __pyx_v_value) | ['__pyx_v_self', '__pyx_v_value'] | static int __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size_2__set__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self, PyObject *__pyx_v_value) {
int __pyx_r;
__Pyx_RefNannyDeclarations
Py_ssize_t __pyx_t_1;
__Pyx_RefNannySetupContext("__set__", 0);
__pyx_t_1 = __Pyx_PyIndex_AsSsize_t(__pyx_v_value); if (unlikely((__pyx_t_1 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 11, __pyx_L1_error)
__pyx_v_self->current_buffer_size = __pyx_t_1;
/* function exit code */
__pyx_r = 0;
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.current_buffer_size.__set__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = -1;
__pyx_L0:;
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 103 | True | 1 |
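The setter above converts the incoming Python object to a Py_ssize_t through Cython's __Pyx_PyIndex_AsSsize_t helper and propagates the error when the value is not index-like or does not fit. The closest plain CPython call is PyNumber_AsSsize_t with PyExc_OverflowError; a short stand-in (an approximation of the conversion only, using a hypothetical slot pointer, not the generated setter):

#include <Python.h>

/* Approximate stand-in for the conversion performed by the property setter. */
static int set_current_buffer_size(Py_ssize_t *slot, PyObject *value)
{
    Py_ssize_t v = PyNumber_AsSsize_t(value, PyExc_OverflowError);
    if (v == -1 && PyErr_Occurred())
        return -1;                   /* TypeError or OverflowError is already set */
    *slot = v;
    return 0;
}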
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size___get__ | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size___get__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_19current_buffer_size___get__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
__Pyx_RefNannySetupContext("__get__", 0);
__Pyx_XDECREF(__pyx_r);
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->current_buffer_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.current_buffer_size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 113 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_2read_into_buffer | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_2read_into_buffer( CYTHON_UNUSED struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_2read_into_buffer(CYTHON_UNUSED struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("read_into_buffer", 0);
/* "clickhouse_driver/bufferedreader.pyx":23
*
* def read_into_buffer(self):
* raise NotImplementedError # <<<<<<<<<<<<<<
*
* def read(self, Py_ssize_t unread):
*/
__Pyx_Raise(__pyx_builtin_NotImplementedError, 0, 0, 0);
__PYX_ERR(0, 23, __pyx_L1_error)
/* "clickhouse_driver/bufferedreader.pyx":22
* super(BufferedReader, self).__init__()
*
* def read_into_buffer(self): # <<<<<<<<<<<<<<
* raise NotImplementedError
*
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read_into_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 73 | True | 1 |
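read_into_buffer in the row above is the base-class stub: it unconditionally raises NotImplementedError and is meant to be overridden by the socket-backed and compressed readers seen in the earlier rows. At the C API level that amounts to setting the exception and returning NULL; a minimal sketch (simplified, not the __Pyx_Raise helper used by the generated code):

#include <Python.h>

/* Base-class stub mirroring `raise NotImplementedError`. */
static PyObject *read_into_buffer_stub(PyObject *self, PyObject *ignored)
{
    (void)self; (void)ignored;
    PyErr_SetNone(PyExc_NotImplementedError);
    return NULL;                     /* NULL tells CPython an exception is pending */
}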
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_4read | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_4read( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self , Py_ssize_t __pyx_v_unread) | ['__pyx_v_self', '__pyx_v_unread'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_4read(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self, Py_ssize_t __pyx_v_unread) {
Py_ssize_t __pyx_v_next_position;
Py_ssize_t __pyx_v_t;
char *__pyx_v_buffer_ptr;
Py_ssize_t __pyx_v_read_bytes;
PyObject *__pyx_v_rv = NULL;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
Py_ssize_t __pyx_t_2;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
Py_ssize_t __pyx_t_6;
Py_ssize_t __pyx_t_7;
__Pyx_RefNannySetupContext("read", 0);
/* "clickhouse_driver/bufferedreader.pyx":28
* # When the buffer is large enough bytes read are almost
* # always hit the buffer.
* cdef Py_ssize_t next_position = unread + self.position # <<<<<<<<<<<<<<
* if next_position < self.current_buffer_size:
* t = self.position
*/
__pyx_v_next_position = (__pyx_v_unread + __pyx_v_self->position);
/* "clickhouse_driver/bufferedreader.pyx":29
* # always hit the buffer.
* cdef Py_ssize_t next_position = unread + self.position
* if next_position < self.current_buffer_size: # <<<<<<<<<<<<<<
* t = self.position
* self.position = next_position
*/
__pyx_t_1 = ((__pyx_v_next_position < __pyx_v_self->current_buffer_size) != 0);
if (__pyx_t_1) {
/* "clickhouse_driver/bufferedreader.pyx":30
* cdef Py_ssize_t next_position = unread + self.position
* if next_position < self.current_buffer_size:
* t = self.position # <<<<<<<<<<<<<<
* self.position = next_position
* return bytes(self.buffer[t:self.position])
*/
__pyx_t_2 = __pyx_v_self->position;
__pyx_v_t = __pyx_t_2;
/* "clickhouse_driver/bufferedreader.pyx":31
* if next_position < self.current_buffer_size:
* t = self.position
* self.position = next_position # <<<<<<<<<<<<<<
* return bytes(self.buffer[t:self.position])
*
*/
__pyx_v_self->position = __pyx_v_next_position;
/* "clickhouse_driver/bufferedreader.pyx":32
* t = self.position
* self.position = next_position
* return bytes(self.buffer[t:self.position]) # <<<<<<<<<<<<<<
*
* cdef char* buffer_ptr = PyByteArray_AsString(self.buffer)
*/
__Pyx_XDECREF(__pyx_r);
if (unlikely(__pyx_v_self->buffer == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(0, 32, __pyx_L1_error)
}
__pyx_t_3 = PySequence_GetSlice(__pyx_v_self->buffer, __pyx_v_t, __pyx_v_self->position); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 32, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = __Pyx_PyObject_CallOneArg(((PyObject *)(&PyBytes_Type)), __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 32, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__pyx_r = __pyx_t_4;
__pyx_t_4 = 0;
goto __pyx_L0;
/* "clickhouse_driver/bufferedreader.pyx":29
* # always hit the buffer.
* cdef Py_ssize_t next_position = unread + self.position
* if next_position < self.current_buffer_size: # <<<<<<<<<<<<<<
* t = self.position
* self.position = next_position
*/
}
/* "clickhouse_driver/bufferedreader.pyx":34
* return bytes(self.buffer[t:self.position])
*
* cdef char* buffer_ptr = PyByteArray_AsString(self.buffer) # <<<<<<<<<<<<<<
* cdef Py_ssize_t read_bytes
* rv = bytes()
*/
__pyx_t_4 = __pyx_v_self->buffer;
__Pyx_INCREF(__pyx_t_4);
__pyx_v_buffer_ptr = PyByteArray_AsString(__pyx_t_4);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/bufferedreader.pyx":36
* cdef char* buffer_ptr = PyByteArray_AsString(self.buffer)
* cdef Py_ssize_t read_bytes
* rv = bytes() # <<<<<<<<<<<<<<
*
* while unread > 0:
*/
__pyx_t_4 = __Pyx_PyObject_CallNoArg(((PyObject *)(&PyBytes_Type))); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 36, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_v_rv = ((PyObject*)__pyx_t_4);
__pyx_t_4 = 0;
/* "clickhouse_driver/bufferedreader.pyx":38
* rv = bytes()
*
* while unread > 0: # <<<<<<<<<<<<<<
* if self.position == self.current_buffer_size:
* self.read_into_buffer()
*/
while (1) {
__pyx_t_1 = ((__pyx_v_unread > 0) != 0);
if (!__pyx_t_1) break;
/* "clickhouse_driver/bufferedreader.pyx":39
*
* while unread > 0:
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* buffer_ptr = PyByteArray_AsString(self.buffer)
*/
__pyx_t_1 = ((__pyx_v_self->position == __pyx_v_self->current_buffer_size) != 0);
if (__pyx_t_1) {
/* "clickhouse_driver/bufferedreader.pyx":40
* while unread > 0:
* if self.position == self.current_buffer_size:
* self.read_into_buffer() # <<<<<<<<<<<<<<
* buffer_ptr = PyByteArray_AsString(self.buffer)
* self.position = 0
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_read_into_buffer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_4 = (__pyx_t_5) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_5) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/bufferedreader.pyx":41
* if self.position == self.current_buffer_size:
* self.read_into_buffer()
* buffer_ptr = PyByteArray_AsString(self.buffer) # <<<<<<<<<<<<<<
* self.position = 0
*
*/
__pyx_t_4 = __pyx_v_self->buffer;
__Pyx_INCREF(__pyx_t_4);
__pyx_v_buffer_ptr = PyByteArray_AsString(__pyx_t_4);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/bufferedreader.pyx":42
* self.read_into_buffer()
* buffer_ptr = PyByteArray_AsString(self.buffer)
* self.position = 0 # <<<<<<<<<<<<<<
*
* read_bytes = min(unread, self.current_buffer_size - self.position)
*/
__pyx_v_self->position = 0;
/* "clickhouse_driver/bufferedreader.pyx":39
*
* while unread > 0:
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* buffer_ptr = PyByteArray_AsString(self.buffer)
*/
}
/* "clickhouse_driver/bufferedreader.pyx":44
* self.position = 0
*
* read_bytes = min(unread, self.current_buffer_size - self.position) # <<<<<<<<<<<<<<
* rv += PyBytes_FromStringAndSize(
* &buffer_ptr[self.position], read_bytes
*/
__pyx_t_2 = (__pyx_v_self->current_buffer_size - __pyx_v_self->position);
__pyx_t_6 = __pyx_v_unread;
if (((__pyx_t_2 < __pyx_t_6) != 0)) {
__pyx_t_7 = __pyx_t_2;
} else {
__pyx_t_7 = __pyx_t_6;
}
__pyx_v_read_bytes = __pyx_t_7;
/* "clickhouse_driver/bufferedreader.pyx":45
*
* read_bytes = min(unread, self.current_buffer_size - self.position)
* rv += PyBytes_FromStringAndSize( # <<<<<<<<<<<<<<
* &buffer_ptr[self.position], read_bytes
* )
*/
__pyx_t_4 = PyBytes_FromStringAndSize((&(__pyx_v_buffer_ptr[__pyx_v_self->position])), __pyx_v_read_bytes); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 45, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_rv, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 45, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF_SET(__pyx_v_rv, ((PyObject*)__pyx_t_3));
__pyx_t_3 = 0;
/* "clickhouse_driver/bufferedreader.pyx":48
* &buffer_ptr[self.position], read_bytes
* )
* self.position += read_bytes # <<<<<<<<<<<<<<
* unread -= read_bytes
*
*/
__pyx_v_self->position = (__pyx_v_self->position + __pyx_v_read_bytes);
/* "clickhouse_driver/bufferedreader.pyx":49
* )
* self.position += read_bytes
* unread -= read_bytes # <<<<<<<<<<<<<<
*
* return rv
*/
__pyx_v_unread = (__pyx_v_unread - __pyx_v_read_bytes);
}
/* "clickhouse_driver/bufferedreader.pyx":51
* unread -= read_bytes
*
* return rv # <<<<<<<<<<<<<<
*
* def read_one(self):
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_v_rv);
__pyx_r = __pyx_v_rv;
goto __pyx_L0;
/* "clickhouse_driver/bufferedreader.pyx":25
* raise NotImplementedError
*
* def read(self, Py_ssize_t unread): # <<<<<<<<<<<<<<
* # When the buffer is large enough bytes read are almost
* # always hit the buffer.
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_rv);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 800 | True | 1 |
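The generated C above corresponds to the buffered read loop quoted in the bufferedreader.pyx comments (lines 25-51). As a reading aid, here is a minimal pure-Python sketch of that loop together with a tiny in-memory stand-in for the reader; the _DemoReader class, its 8-byte default buffer, and the name buffered_read are illustrative choices, not part of the project.

class _DemoReader:
    # Minimal stand-in exposing the same fields the Cython class uses:
    # buffer (bytearray), position, current_buffer_size, read_into_buffer().
    def __init__(self, data, bufsize=8):
        self._data = memoryview(data)
        self._offset = 0
        self.buffer = bytearray(bufsize)
        self.position = 0
        self.current_buffer_size = 0

    def read_into_buffer(self):
        chunk = self._data[self._offset:self._offset + len(self.buffer)]
        if not chunk:
            raise EOFError('Unexpected EOF while reading bytes')
        self.buffer[:len(chunk)] = chunk
        self._offset += len(chunk)
        self.current_buffer_size = len(chunk)


def buffered_read(reader, unread):
    # Mirrors the quoted loop: refill when the buffer is exhausted, then
    # take min(unread, bytes left in the buffer) on each iteration.
    rv = b''
    while unread > 0:
        if reader.position == reader.current_buffer_size:
            reader.read_into_buffer()
            reader.position = 0
        read_bytes = min(unread, reader.current_buffer_size - reader.position)
        rv += bytes(reader.buffer[reader.position:reader.position + read_bytes])
        reader.position += read_bytes
        unread -= read_bytes
    return rv


# Usage: read 10 bytes through an 8-byte buffer, forcing one refill.
assert buffered_read(_DemoReader(b'abcdefghijklmno'), 10) == b'abcdefghij'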
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_6read_one | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_6read_one( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_6read_one(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self) {
unsigned char __pyx_v_rv;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
int __pyx_t_5;
__Pyx_RefNannySetupContext("read_one", 0);
/* "clickhouse_driver/bufferedreader.pyx":54
*
* def read_one(self):
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* self.position = 0
*/
__pyx_t_1 = ((__pyx_v_self->position == __pyx_v_self->current_buffer_size) != 0);
if (__pyx_t_1) {
/* "clickhouse_driver/bufferedreader.pyx":55
* def read_one(self):
* if self.position == self.current_buffer_size:
* self.read_into_buffer() # <<<<<<<<<<<<<<
* self.position = 0
*
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_read_into_buffer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 55, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 55, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/bufferedreader.pyx":56
* if self.position == self.current_buffer_size:
* self.read_into_buffer()
* self.position = 0 # <<<<<<<<<<<<<<
*
* rv = self.buffer[self.position]
*/
__pyx_v_self->position = 0;
/* "clickhouse_driver/bufferedreader.pyx":54
*
* def read_one(self):
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* self.position = 0
*/
}
/* "clickhouse_driver/bufferedreader.pyx":58
* self.position = 0
*
* rv = self.buffer[self.position] # <<<<<<<<<<<<<<
* self.position += 1
* return rv
*/
__pyx_t_5 = __Pyx_GetItemInt_ByteArray(__pyx_v_self->buffer, __pyx_v_self->position, Py_ssize_t, 1, PyInt_FromSsize_t, 0, 1, 1); if (unlikely(__pyx_t_5 == -1)) __PYX_ERR(0, 58, __pyx_L1_error)
__pyx_v_rv = __pyx_t_5;
/* "clickhouse_driver/bufferedreader.pyx":59
*
* rv = self.buffer[self.position]
* self.position += 1 # <<<<<<<<<<<<<<
* return rv
*
*/
__pyx_v_self->position = (__pyx_v_self->position + 1);
/* "clickhouse_driver/bufferedreader.pyx":60
* rv = self.buffer[self.position]
* self.position += 1
* return rv # <<<<<<<<<<<<<<
*
* def read_strings(self, Py_ssize_t n_items, encoding=None):
*/
__Pyx_XDECREF(__pyx_r);
__pyx_t_2 = __Pyx_PyInt_From_unsigned_char(__pyx_v_rv); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 60, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
/* "clickhouse_driver/bufferedreader.pyx":53
* return rv
*
* def read_one(self): # <<<<<<<<<<<<<<
* if self.position == self.current_buffer_size:
* self.read_into_buffer()
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read_one", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 392 | True | 1 |
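read_one above is the single-byte variant of the same pattern (quoted lines 53-60); a pure-Python equivalent, reusing the reader interface from the _DemoReader sketch after read above (the function name is illustrative):

def buffered_read_one(reader):
    if reader.position == reader.current_buffer_size:
        reader.read_into_buffer()
        reader.position = 0
    rv = reader.buffer[reader.position]  # bytearray indexing yields the byte as an int
    reader.position += 1
    return rv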
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position_2__set__ | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position_2__set__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self , PyObject * __pyx_v_value) | ['__pyx_v_self', '__pyx_v_value'] | static int __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position_2__set__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self, PyObject *__pyx_v_value) {
int __pyx_r;
__Pyx_RefNannyDeclarations
Py_ssize_t __pyx_t_1;
__Pyx_RefNannySetupContext("__set__", 0);
__pyx_t_1 = __Pyx_PyIndex_AsSsize_t(__pyx_v_value); if (unlikely((__pyx_t_1 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 11, __pyx_L1_error)
__pyx_v_self->position = __pyx_t_1;
/* function exit code */
__pyx_r = 0;
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.position.__set__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = -1;
__pyx_L0:;
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 103 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position___get__ | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position___get__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8position___get__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
__Pyx_RefNannySetupContext("__get__", 0);
__Pyx_XDECREF(__pyx_r);
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->position); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.position.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 113 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8read_strings | __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8read_strings( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader * __pyx_v_self , Py_ssize_t __pyx_v_n_items , PyObject * __pyx_v_encoding) | ['__pyx_v_self', '__pyx_v_n_items', '__pyx_v_encoding'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8read_strings(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *__pyx_v_self, Py_ssize_t __pyx_v_n_items, PyObject *__pyx_v_encoding) {
PyObject *__pyx_v_items = NULL;
Py_ssize_t __pyx_v_i;
char *__pyx_v_buffer_ptr;
Py_ssize_t __pyx_v_right;
Py_ssize_t __pyx_v_size;
Py_ssize_t __pyx_v_shift;
Py_ssize_t __pyx_v_bytes_read;
unsigned char __pyx_v_b;
char *__pyx_v_c_string;
Py_ssize_t __pyx_v_c_string_size;
char *__pyx_v_c_encoding;
PyObject *__pyx_v_rv = 0;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
int __pyx_t_2;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
char *__pyx_t_5;
Py_ssize_t __pyx_t_6;
Py_ssize_t __pyx_t_7;
Py_ssize_t __pyx_t_8;
Py_ssize_t __pyx_t_9;
Py_ssize_t __pyx_t_10;
Py_ssize_t __pyx_t_11;
PyObject *__pyx_t_12 = NULL;
PyObject *__pyx_t_13 = NULL;
PyObject *__pyx_t_14 = NULL;
int __pyx_t_15;
PyObject *__pyx_t_16 = NULL;
__Pyx_RefNannySetupContext("read_strings", 0);
__Pyx_INCREF(__pyx_v_encoding);
/* "clickhouse_driver/bufferedreader.pyx":67
* We inline strings reading logic here to avoid this overhead.
* """
* items = PyTuple_New(n_items) # <<<<<<<<<<<<<<
*
* cdef Py_ssize_t i
*/
__pyx_t_1 = PyTuple_New(__pyx_v_n_items); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 67, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_v_items = ((PyObject*)__pyx_t_1);
__pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":71
* cdef Py_ssize_t i
* # Buffer vars
* cdef char* buffer_ptr = PyByteArray_AsString(self.buffer) # <<<<<<<<<<<<<<
* cdef Py_ssize_t right
* # String length vars
*/
__pyx_t_1 = __pyx_v_self->buffer;
__Pyx_INCREF(__pyx_t_1);
__pyx_v_buffer_ptr = PyByteArray_AsString(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":78
*
* # String for decode vars.
* cdef char *c_string = NULL # <<<<<<<<<<<<<<
* cdef Py_ssize_t c_string_size = 1024
* cdef char *c_encoding = NULL
*/
__pyx_v_c_string = NULL;
/* "clickhouse_driver/bufferedreader.pyx":79
* # String for decode vars.
* cdef char *c_string = NULL
* cdef Py_ssize_t c_string_size = 1024 # <<<<<<<<<<<<<<
* cdef char *c_encoding = NULL
* if encoding:
*/
__pyx_v_c_string_size = 0x400;
/* "clickhouse_driver/bufferedreader.pyx":80
* cdef char *c_string = NULL
* cdef Py_ssize_t c_string_size = 1024
* cdef char *c_encoding = NULL # <<<<<<<<<<<<<<
* if encoding:
* encoding = encoding.encode('utf-8')
*/
__pyx_v_c_encoding = NULL;
/* "clickhouse_driver/bufferedreader.pyx":81
* cdef Py_ssize_t c_string_size = 1024
* cdef char *c_encoding = NULL
* if encoding: # <<<<<<<<<<<<<<
* encoding = encoding.encode('utf-8')
* c_encoding = encoding
*/
__pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_encoding); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 81, __pyx_L1_error)
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":82
* cdef char *c_encoding = NULL
* if encoding:
* encoding = encoding.encode('utf-8') # <<<<<<<<<<<<<<
* c_encoding = encoding
*
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_encoding, __pyx_n_s_encode); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 82, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_1 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_kp_u_utf_8) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_kp_u_utf_8);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 82, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF_SET(__pyx_v_encoding, __pyx_t_1);
__pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":83
* if encoding:
* encoding = encoding.encode('utf-8')
* c_encoding = encoding # <<<<<<<<<<<<<<
*
* cdef object rv = object()
*/
__pyx_t_5 = __Pyx_PyObject_AsWritableString(__pyx_v_encoding); if (unlikely((!__pyx_t_5) && PyErr_Occurred())) __PYX_ERR(0, 83, __pyx_L1_error)
__pyx_v_c_encoding = __pyx_t_5;
/* "clickhouse_driver/bufferedreader.pyx":81
* cdef Py_ssize_t c_string_size = 1024
* cdef char *c_encoding = NULL
* if encoding: # <<<<<<<<<<<<<<
* encoding = encoding.encode('utf-8')
* c_encoding = encoding
*/
}
/* "clickhouse_driver/bufferedreader.pyx":85
* c_encoding = encoding
*
* cdef object rv = object() # <<<<<<<<<<<<<<
* # String for decode vars.
* if c_encoding:
*/
__pyx_t_1 = __Pyx_PyObject_CallNoArg(__pyx_builtin_object); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_v_rv = __pyx_t_1;
__pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":87
* cdef object rv = object()
* # String for decode vars.
* if c_encoding: # <<<<<<<<<<<<<<
* c_string = <char *> PyMem_Realloc(NULL, c_string_size)
*
*/
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":88
* # String for decode vars.
* if c_encoding:
* c_string = <char *> PyMem_Realloc(NULL, c_string_size) # <<<<<<<<<<<<<<
*
* for i in range(n_items):
*/
__pyx_v_c_string = ((char *)PyMem_Realloc(NULL, __pyx_v_c_string_size));
/* "clickhouse_driver/bufferedreader.pyx":87
* cdef object rv = object()
* # String for decode vars.
* if c_encoding: # <<<<<<<<<<<<<<
* c_string = <char *> PyMem_Realloc(NULL, c_string_size)
*
*/
}
/* "clickhouse_driver/bufferedreader.pyx":90
* c_string = <char *> PyMem_Realloc(NULL, c_string_size)
*
* for i in range(n_items): # <<<<<<<<<<<<<<
* shift = size = 0
*
*/
__pyx_t_6 = __pyx_v_n_items;
__pyx_t_7 = __pyx_t_6;
for (__pyx_t_8 = 0; __pyx_t_8 < __pyx_t_7; __pyx_t_8+=1) {
__pyx_v_i = __pyx_t_8;
/* "clickhouse_driver/bufferedreader.pyx":91
*
* for i in range(n_items):
* shift = size = 0 # <<<<<<<<<<<<<<
*
* # Read string size
*/
__pyx_v_shift = 0;
__pyx_v_size = 0;
/* "clickhouse_driver/bufferedreader.pyx":94
*
* # Read string size
* while True: # <<<<<<<<<<<<<<
* if self.position == self.current_buffer_size:
* self.read_into_buffer()
*/
while (1) {
/* "clickhouse_driver/bufferedreader.pyx":95
* # Read string size
* while True:
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* # `read_into_buffer` can override buffer
*/
__pyx_t_2 = ((__pyx_v_self->position == __pyx_v_self->current_buffer_size) != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":96
* while True:
* if self.position == self.current_buffer_size:
* self.read_into_buffer() # <<<<<<<<<<<<<<
* # `read_into_buffer` can override buffer
* buffer_ptr = PyByteArray_AsString(self.buffer)
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_read_into_buffer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 96, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_1 = (__pyx_t_4) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 96, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":98
* self.read_into_buffer()
* # `read_into_buffer` can override buffer
* buffer_ptr = PyByteArray_AsString(self.buffer) # <<<<<<<<<<<<<<
* self.position = 0
*
*/
__pyx_t_1 = __pyx_v_self->buffer;
__Pyx_INCREF(__pyx_t_1);
__pyx_v_buffer_ptr = PyByteArray_AsString(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":99
* # `read_into_buffer` can override buffer
* buffer_ptr = PyByteArray_AsString(self.buffer)
* self.position = 0 # <<<<<<<<<<<<<<
*
* b = buffer_ptr[self.position]
*/
__pyx_v_self->position = 0;
/* "clickhouse_driver/bufferedreader.pyx":95
* # Read string size
* while True:
* if self.position == self.current_buffer_size: # <<<<<<<<<<<<<<
* self.read_into_buffer()
* # `read_into_buffer` can override buffer
*/
}
/* "clickhouse_driver/bufferedreader.pyx":101
* self.position = 0
*
* b = buffer_ptr[self.position] # <<<<<<<<<<<<<<
* self.position += 1
*
*/
__pyx_v_b = (__pyx_v_buffer_ptr[__pyx_v_self->position]);
/* "clickhouse_driver/bufferedreader.pyx":102
*
* b = buffer_ptr[self.position]
* self.position += 1 # <<<<<<<<<<<<<<
*
* size |= (b & 0x7f) << shift
*/
__pyx_v_self->position = (__pyx_v_self->position + 1);
/* "clickhouse_driver/bufferedreader.pyx":104
* self.position += 1
*
* size |= (b & 0x7f) << shift # <<<<<<<<<<<<<<
* if b < 0x80:
* break
*/
__pyx_v_size = (__pyx_v_size | ((__pyx_v_b & 0x7f) << __pyx_v_shift));
/* "clickhouse_driver/bufferedreader.pyx":105
*
* size |= (b & 0x7f) << shift
* if b < 0x80: # <<<<<<<<<<<<<<
* break
*
*/
__pyx_t_2 = ((__pyx_v_b < 0x80) != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":106
* size |= (b & 0x7f) << shift
* if b < 0x80:
* break # <<<<<<<<<<<<<<
*
* shift += 7
*/
goto __pyx_L8_break;
/* "clickhouse_driver/bufferedreader.pyx":105
*
* size |= (b & 0x7f) << shift
* if b < 0x80: # <<<<<<<<<<<<<<
* break
*
*/
}
/* "clickhouse_driver/bufferedreader.pyx":108
* break
*
* shift += 7 # <<<<<<<<<<<<<<
*
* right = self.position + size
*/
__pyx_v_shift = (__pyx_v_shift + 7);
}
__pyx_L8_break:;
/* "clickhouse_driver/bufferedreader.pyx":110
* shift += 7
*
* right = self.position + size # <<<<<<<<<<<<<<
*
* if c_encoding:
*/
__pyx_v_right = (__pyx_v_self->position + __pyx_v_size);
/* "clickhouse_driver/bufferedreader.pyx":112
* right = self.position + size
*
* if c_encoding: # <<<<<<<<<<<<<<
* if size + 1 > c_string_size:
* c_string_size = size + 1
*/
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":113
*
* if c_encoding:
* if size + 1 > c_string_size: # <<<<<<<<<<<<<<
* c_string_size = size + 1
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
*/
__pyx_t_2 = (((__pyx_v_size + 1) > __pyx_v_c_string_size) != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":114
* if c_encoding:
* if size + 1 > c_string_size:
* c_string_size = size + 1 # <<<<<<<<<<<<<<
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
* if c_string is NULL:
*/
__pyx_v_c_string_size = (__pyx_v_size + 1);
/* "clickhouse_driver/bufferedreader.pyx":115
* if size + 1 > c_string_size:
* c_string_size = size + 1
* c_string = <char *> PyMem_Realloc(c_string, c_string_size) # <<<<<<<<<<<<<<
* if c_string is NULL:
* raise MemoryError()
*/
__pyx_v_c_string = ((char *)PyMem_Realloc(__pyx_v_c_string, __pyx_v_c_string_size));
/* "clickhouse_driver/bufferedreader.pyx":116
* c_string_size = size + 1
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
* if c_string is NULL: # <<<<<<<<<<<<<<
* raise MemoryError()
* c_string[size] = 0
*/
__pyx_t_2 = ((__pyx_v_c_string == NULL) != 0);
if (unlikely(__pyx_t_2)) {
/* "clickhouse_driver/bufferedreader.pyx":117
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
* if c_string is NULL:
* raise MemoryError() # <<<<<<<<<<<<<<
* c_string[size] = 0
* bytes_read = 0
*/
PyErr_NoMemory(); __PYX_ERR(0, 117, __pyx_L1_error)
/* "clickhouse_driver/bufferedreader.pyx":116
* c_string_size = size + 1
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
* if c_string is NULL: # <<<<<<<<<<<<<<
* raise MemoryError()
* c_string[size] = 0
*/
}
/* "clickhouse_driver/bufferedreader.pyx":113
*
* if c_encoding:
* if size + 1 > c_string_size: # <<<<<<<<<<<<<<
* c_string_size = size + 1
* c_string = <char *> PyMem_Realloc(c_string, c_string_size)
*/
}
/* "clickhouse_driver/bufferedreader.pyx":118
* if c_string is NULL:
* raise MemoryError()
* c_string[size] = 0 # <<<<<<<<<<<<<<
* bytes_read = 0
*
*/
(__pyx_v_c_string[__pyx_v_size]) = 0;
/* "clickhouse_driver/bufferedreader.pyx":119
* raise MemoryError()
* c_string[size] = 0
* bytes_read = 0 # <<<<<<<<<<<<<<
*
* # Decoding pure c strings in Cython is faster than in pure Python.
*/
__pyx_v_bytes_read = 0;
/* "clickhouse_driver/bufferedreader.pyx":112
* right = self.position + size
*
* if c_encoding: # <<<<<<<<<<<<<<
* if size + 1 > c_string_size:
* c_string_size = size + 1
*/
}
/* "clickhouse_driver/bufferedreader.pyx":124
* # We need to copy it into buffer for adding null symbol at the end.
* # In ClickHouse block there is no null
* if right > self.current_buffer_size: # <<<<<<<<<<<<<<
* if c_encoding:
* memcpy(&c_string[bytes_read], &buffer_ptr[self.position],
*/
__pyx_t_2 = ((__pyx_v_right > __pyx_v_self->current_buffer_size) != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":125
* # In ClickHouse block there is no null
* if right > self.current_buffer_size:
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(&c_string[bytes_read], &buffer_ptr[self.position],
* self.current_buffer_size - self.position)
*/
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":126
* if right > self.current_buffer_size:
* if c_encoding:
* memcpy(&c_string[bytes_read], &buffer_ptr[self.position], # <<<<<<<<<<<<<<
* self.current_buffer_size - self.position)
* else:
*/
(void)(memcpy((&(__pyx_v_c_string[__pyx_v_bytes_read])), (&(__pyx_v_buffer_ptr[__pyx_v_self->position])), (__pyx_v_self->current_buffer_size - __pyx_v_self->position)));
/* "clickhouse_driver/bufferedreader.pyx":125
* # In ClickHouse block there is no null
* if right > self.current_buffer_size:
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(&c_string[bytes_read], &buffer_ptr[self.position],
* self.current_buffer_size - self.position)
*/
goto __pyx_L15;
}
/* "clickhouse_driver/bufferedreader.pyx":129
* self.current_buffer_size - self.position)
* else:
* rv = PyBytes_FromStringAndSize( # <<<<<<<<<<<<<<
* &buffer_ptr[self.position],
* self.current_buffer_size - self.position
*/
/*else*/ {
/* "clickhouse_driver/bufferedreader.pyx":131
* rv = PyBytes_FromStringAndSize(
* &buffer_ptr[self.position],
* self.current_buffer_size - self.position # <<<<<<<<<<<<<<
* )
*
*/
__pyx_t_1 = PyBytes_FromStringAndSize((&(__pyx_v_buffer_ptr[__pyx_v_self->position])), (__pyx_v_self->current_buffer_size - __pyx_v_self->position)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 129, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF_SET(__pyx_v_rv, __pyx_t_1);
__pyx_t_1 = 0;
}
__pyx_L15:;
/* "clickhouse_driver/bufferedreader.pyx":134
* )
*
* bytes_read = self.current_buffer_size - self.position # <<<<<<<<<<<<<<
* # Read the rest of the string.
* while bytes_read != size:
*/
__pyx_v_bytes_read = (__pyx_v_self->current_buffer_size - __pyx_v_self->position);
/* "clickhouse_driver/bufferedreader.pyx":136
* bytes_read = self.current_buffer_size - self.position
* # Read the rest of the string.
* while bytes_read != size: # <<<<<<<<<<<<<<
* self.position = size - bytes_read
*
*/
while (1) {
__pyx_t_2 = ((__pyx_v_bytes_read != __pyx_v_size) != 0);
if (!__pyx_t_2) break;
/* "clickhouse_driver/bufferedreader.pyx":137
* # Read the rest of the string.
* while bytes_read != size:
* self.position = size - bytes_read # <<<<<<<<<<<<<<
*
* self.read_into_buffer()
*/
__pyx_v_self->position = (__pyx_v_size - __pyx_v_bytes_read);
/* "clickhouse_driver/bufferedreader.pyx":139
* self.position = size - bytes_read
*
* self.read_into_buffer() # <<<<<<<<<<<<<<
* # `read_into_buffer` can override buffer
* buffer_ptr = PyByteArray_AsString(self.buffer)
*/
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_read_into_buffer); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 139, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_1 = (__pyx_t_4) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 139, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":141
* self.read_into_buffer()
* # `read_into_buffer` can override buffer
* buffer_ptr = PyByteArray_AsString(self.buffer) # <<<<<<<<<<<<<<
* # There can be not enough data in buffer.
* self.position = min(
*/
__pyx_t_1 = __pyx_v_self->buffer;
__Pyx_INCREF(__pyx_t_1);
__pyx_v_buffer_ptr = PyByteArray_AsString(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":144
* # There can be not enough data in buffer.
* self.position = min(
* self.position, self.current_buffer_size # <<<<<<<<<<<<<<
* )
* if c_encoding:
*/
__pyx_t_9 = __pyx_v_self->current_buffer_size;
__pyx_t_10 = __pyx_v_self->position;
if (((__pyx_t_9 < __pyx_t_10) != 0)) {
__pyx_t_11 = __pyx_t_9;
} else {
__pyx_t_11 = __pyx_t_10;
}
/* "clickhouse_driver/bufferedreader.pyx":143
* buffer_ptr = PyByteArray_AsString(self.buffer)
* # There can be not enough data in buffer.
* self.position = min( # <<<<<<<<<<<<<<
* self.position, self.current_buffer_size
* )
*/
__pyx_v_self->position = __pyx_t_11;
/* "clickhouse_driver/bufferedreader.pyx":146
* self.position, self.current_buffer_size
* )
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(
* &c_string[bytes_read], buffer_ptr, self.position
*/
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":147
* )
* if c_encoding:
* memcpy( # <<<<<<<<<<<<<<
* &c_string[bytes_read], buffer_ptr, self.position
* )
*/
(void)(memcpy((&(__pyx_v_c_string[__pyx_v_bytes_read])), __pyx_v_buffer_ptr, __pyx_v_self->position));
/* "clickhouse_driver/bufferedreader.pyx":146
* self.position, self.current_buffer_size
* )
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(
* &c_string[bytes_read], buffer_ptr, self.position
*/
goto __pyx_L18;
}
/* "clickhouse_driver/bufferedreader.pyx":151
* )
* else:
* rv += PyBytes_FromStringAndSize( # <<<<<<<<<<<<<<
* buffer_ptr, self.position
* )
*/
/*else*/ {
/* "clickhouse_driver/bufferedreader.pyx":152
* else:
* rv += PyBytes_FromStringAndSize(
* buffer_ptr, self.position # <<<<<<<<<<<<<<
* )
* bytes_read += self.position
*/
__pyx_t_1 = PyBytes_FromStringAndSize(__pyx_v_buffer_ptr, __pyx_v_self->position); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
/* "clickhouse_driver/bufferedreader.pyx":151
* )
* else:
* rv += PyBytes_FromStringAndSize( # <<<<<<<<<<<<<<
* buffer_ptr, self.position
* )
*/
__pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_rv, __pyx_t_1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 151, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF_SET(__pyx_v_rv, __pyx_t_3);
__pyx_t_3 = 0;
}
__pyx_L18:;
/* "clickhouse_driver/bufferedreader.pyx":154
* buffer_ptr, self.position
* )
* bytes_read += self.position # <<<<<<<<<<<<<<
*
* else:
*/
__pyx_v_bytes_read = (__pyx_v_bytes_read + __pyx_v_self->position);
}
/* "clickhouse_driver/bufferedreader.pyx":124
* # We need to copy it into buffer for adding null symbol at the end.
* # In ClickHouse block there is no null
* if right > self.current_buffer_size: # <<<<<<<<<<<<<<
* if c_encoding:
* memcpy(&c_string[bytes_read], &buffer_ptr[self.position],
*/
goto __pyx_L14;
}
/* "clickhouse_driver/bufferedreader.pyx":157
*
* else:
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(c_string, &buffer_ptr[self.position], size)
* else:
*/
/*else*/ {
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":158
* else:
* if c_encoding:
* memcpy(c_string, &buffer_ptr[self.position], size) # <<<<<<<<<<<<<<
* else:
* rv = PyBytes_FromStringAndSize(
*/
(void)(memcpy(__pyx_v_c_string, (&(__pyx_v_buffer_ptr[__pyx_v_self->position])), __pyx_v_size));
/* "clickhouse_driver/bufferedreader.pyx":157
*
* else:
* if c_encoding: # <<<<<<<<<<<<<<
* memcpy(c_string, &buffer_ptr[self.position], size)
* else:
*/
goto __pyx_L19;
}
/* "clickhouse_driver/bufferedreader.pyx":160
* memcpy(c_string, &buffer_ptr[self.position], size)
* else:
* rv = PyBytes_FromStringAndSize( # <<<<<<<<<<<<<<
* &buffer_ptr[self.position], size
* )
*/
/*else*/ {
/* "clickhouse_driver/bufferedreader.pyx":161
* else:
* rv = PyBytes_FromStringAndSize(
* &buffer_ptr[self.position], size # <<<<<<<<<<<<<<
* )
* self.position = right
*/
__pyx_t_3 = PyBytes_FromStringAndSize((&(__pyx_v_buffer_ptr[__pyx_v_self->position])), __pyx_v_size); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 160, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF_SET(__pyx_v_rv, __pyx_t_3);
__pyx_t_3 = 0;
}
__pyx_L19:;
/* "clickhouse_driver/bufferedreader.pyx":163
* &buffer_ptr[self.position], size
* )
* self.position = right # <<<<<<<<<<<<<<
*
* if c_encoding:
*/
__pyx_v_self->position = __pyx_v_right;
}
__pyx_L14:;
/* "clickhouse_driver/bufferedreader.pyx":165
* self.position = right
*
* if c_encoding: # <<<<<<<<<<<<<<
* try:
* rv = c_string[:size].decode(c_encoding)
*/
__pyx_t_2 = (__pyx_v_c_encoding != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":166
*
* if c_encoding:
* try: # <<<<<<<<<<<<<<
* rv = c_string[:size].decode(c_encoding)
* except UnicodeDecodeError:
*/
{
__Pyx_PyThreadState_declare
__Pyx_PyThreadState_assign
__Pyx_ExceptionSave(&__pyx_t_12, &__pyx_t_13, &__pyx_t_14);
__Pyx_XGOTREF(__pyx_t_12);
__Pyx_XGOTREF(__pyx_t_13);
__Pyx_XGOTREF(__pyx_t_14);
/*try:*/ {
/* "clickhouse_driver/bufferedreader.pyx":167
* if c_encoding:
* try:
* rv = c_string[:size].decode(c_encoding) # <<<<<<<<<<<<<<
* except UnicodeDecodeError:
* rv = PyBytes_FromStringAndSize(c_string, size)
*/
__pyx_t_3 = __Pyx_decode_c_string(__pyx_v_c_string, 0, __pyx_v_size, __pyx_v_c_encoding, NULL, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 167, __pyx_L21_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF_SET(__pyx_v_rv, __pyx_t_3);
__pyx_t_3 = 0;
/* "clickhouse_driver/bufferedreader.pyx":166
*
* if c_encoding:
* try: # <<<<<<<<<<<<<<
* rv = c_string[:size].decode(c_encoding)
* except UnicodeDecodeError:
*/
}
__Pyx_XDECREF(__pyx_t_12); __pyx_t_12 = 0;
__Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0;
__Pyx_XDECREF(__pyx_t_14); __pyx_t_14 = 0;
goto __pyx_L28_try_end;
__pyx_L21_error:;
__Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/bufferedreader.pyx":168
* try:
* rv = c_string[:size].decode(c_encoding)
* except UnicodeDecodeError: # <<<<<<<<<<<<<<
* rv = PyBytes_FromStringAndSize(c_string, size)
*
*/
__pyx_t_15 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_UnicodeDecodeError);
if (__pyx_t_15) {
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read_strings", __pyx_clineno, __pyx_lineno, __pyx_filename);
if (__Pyx_GetException(&__pyx_t_3, &__pyx_t_1, &__pyx_t_4) < 0) __PYX_ERR(0, 168, __pyx_L23_except_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_t_4);
/* "clickhouse_driver/bufferedreader.pyx":169
* rv = c_string[:size].decode(c_encoding)
* except UnicodeDecodeError:
* rv = PyBytes_FromStringAndSize(c_string, size) # <<<<<<<<<<<<<<
*
* Py_INCREF(rv)
*/
__pyx_t_16 = PyBytes_FromStringAndSize(__pyx_v_c_string, __pyx_v_size); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 169, __pyx_L23_except_error)
__Pyx_GOTREF(__pyx_t_16);
__Pyx_DECREF_SET(__pyx_v_rv, __pyx_t_16);
__pyx_t_16 = 0;
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
goto __pyx_L22_exception_handled;
}
goto __pyx_L23_except_error;
__pyx_L23_except_error:;
/* "clickhouse_driver/bufferedreader.pyx":166
*
* if c_encoding:
* try: # <<<<<<<<<<<<<<
* rv = c_string[:size].decode(c_encoding)
* except UnicodeDecodeError:
*/
__Pyx_XGIVEREF(__pyx_t_12);
__Pyx_XGIVEREF(__pyx_t_13);
__Pyx_XGIVEREF(__pyx_t_14);
__Pyx_ExceptionReset(__pyx_t_12, __pyx_t_13, __pyx_t_14);
goto __pyx_L1_error;
__pyx_L22_exception_handled:;
__Pyx_XGIVEREF(__pyx_t_12);
__Pyx_XGIVEREF(__pyx_t_13);
__Pyx_XGIVEREF(__pyx_t_14);
__Pyx_ExceptionReset(__pyx_t_12, __pyx_t_13, __pyx_t_14);
__pyx_L28_try_end:;
}
/* "clickhouse_driver/bufferedreader.pyx":165
* self.position = right
*
* if c_encoding: # <<<<<<<<<<<<<<
* try:
* rv = c_string[:size].decode(c_encoding)
*/
}
/* "clickhouse_driver/bufferedreader.pyx":171
* rv = PyBytes_FromStringAndSize(c_string, size)
*
* Py_INCREF(rv) # <<<<<<<<<<<<<<
* PyTuple_SET_ITEM(items, i, rv)
*
*/
Py_INCREF(__pyx_v_rv);
/* "clickhouse_driver/bufferedreader.pyx":172
*
* Py_INCREF(rv)
* PyTuple_SET_ITEM(items, i, rv) # <<<<<<<<<<<<<<
*
* if c_string:
*/
PyTuple_SET_ITEM(__pyx_v_items, __pyx_v_i, __pyx_v_rv);
}
/* "clickhouse_driver/bufferedreader.pyx":174
* PyTuple_SET_ITEM(items, i, rv)
*
* if c_string: # <<<<<<<<<<<<<<
* PyMem_Free(c_string)
*
*/
__pyx_t_2 = (__pyx_v_c_string != 0);
if (__pyx_t_2) {
/* "clickhouse_driver/bufferedreader.pyx":175
*
* if c_string:
* PyMem_Free(c_string) # <<<<<<<<<<<<<<
*
* return items
*/
PyMem_Free(__pyx_v_c_string);
/* "clickhouse_driver/bufferedreader.pyx":174
* PyTuple_SET_ITEM(items, i, rv)
*
* if c_string: # <<<<<<<<<<<<<<
* PyMem_Free(c_string)
*
*/
}
/* "clickhouse_driver/bufferedreader.pyx":177
* PyMem_Free(c_string)
*
* return items # <<<<<<<<<<<<<<
*
*
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_v_items);
__pyx_r = __pyx_v_items;
goto __pyx_L0;
/* "clickhouse_driver/bufferedreader.pyx":62
* return rv
*
* def read_strings(self, Py_ssize_t n_items, encoding=None): # <<<<<<<<<<<<<<
* """
* Python has great overhead between function calls.
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_16);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read_strings", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_items);
__Pyx_XDECREF(__pyx_v_rv);
__Pyx_XDECREF(__pyx_v_encoding);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 2108 | True | 1 |
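Much of read_strings above deals with the length prefix of each string: the quoted lines 94-108 decode a 7-bit-per-byte varint from the buffer, and that server-supplied size then drives the allocation (c_string_size = size + 1) and the memcpy calls later in the function, presumably the unchecked copy path that the CWE-120 entry above describes. A pure-Python sketch of just the varint step, using the _DemoReader helper from the sketch after read (the function name is illustrative):

def read_varint_size(reader):
    # Decodes the length prefix: 7 data bits per byte,
    # high bit set means "more bytes follow".
    size = shift = 0
    while True:
        if reader.position == reader.current_buffer_size:
            reader.read_into_buffer()
            reader.position = 0
        b = reader.buffer[reader.position]
        reader.position += 1
        size |= (b & 0x7f) << shift
        if b < 0x80:
            break
        shift += 7
    return size


# Usage: the two bytes 0x96 0x01 encode 150.
assert read_varint_size(_DemoReader(bytes([0x96, 0x01]))) == 150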
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_2read_into_buffer | __pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_2read_into_buffer( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_2read_into_buffer(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader *__pyx_v_self) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
Py_ssize_t __pyx_t_4;
int __pyx_t_5;
__Pyx_RefNannySetupContext("read_into_buffer", 0);
/* "clickhouse_driver/bufferedreader.pyx":188
*
* def read_into_buffer(self):
* self.current_buffer_size = self.sock.recv_into(self.buffer) # <<<<<<<<<<<<<<
*
* if self.current_buffer_size == 0:
*/
__pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_v_self->sock, __pyx_n_s_recv_into); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 188, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_3 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_3)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_3);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_1 = (__pyx_t_3) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_3, __pyx_v_self->__pyx_base.buffer) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v_self->__pyx_base.buffer);
__Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 188, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_4 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_4 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 188, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v_self->__pyx_base.current_buffer_size = __pyx_t_4;
/* "clickhouse_driver/bufferedreader.pyx":190
* self.current_buffer_size = self.sock.recv_into(self.buffer)
*
* if self.current_buffer_size == 0: # <<<<<<<<<<<<<<
* raise EOFError('Unexpected EOF while reading bytes')
*
*/
__pyx_t_5 = ((__pyx_v_self->__pyx_base.current_buffer_size == 0) != 0);
if (unlikely(__pyx_t_5)) {
/* "clickhouse_driver/bufferedreader.pyx":191
*
* if self.current_buffer_size == 0:
* raise EOFError('Unexpected EOF while reading bytes') # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_EOFError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 191, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_Raise(__pyx_t_1, 0, 0, 0);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__PYX_ERR(0, 191, __pyx_L1_error)
/* "clickhouse_driver/bufferedreader.pyx":190
* self.current_buffer_size = self.sock.recv_into(self.buffer)
*
* if self.current_buffer_size == 0: # <<<<<<<<<<<<<<
* raise EOFError('Unexpected EOF while reading bytes')
*
*/
}
/* "clickhouse_driver/bufferedreader.pyx":187
* super(BufferedSocketReader, self).__init__(bufsize)
*
* def read_into_buffer(self): # <<<<<<<<<<<<<<
* self.current_buffer_size = self.sock.recv_into(self.buffer)
*
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedSocketReader.read_into_buffer", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 404 | True | 1 |
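The refill step for the socket-backed reader (quoted lines 187-191 above) is recv_into on the preallocated bytearray plus an EOF check. A pure-Python equivalent of that step; the function name and the explicit return value are illustrative, the real method stores the size on self:

def socket_read_into_buffer(sock, buffer):
    # sock: a connected socket.socket (anything providing recv_into works);
    # buffer: the preallocated bytearray the reader cycles through.
    current_buffer_size = sock.recv_into(buffer)
    if current_buffer_size == 0:
        raise EOFError('Unexpected EOF while reading bytes')
    return current_buffer_size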
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_4__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_4__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_20BufferedSocketReader_4__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
int __pyx_t_4;
int __pyx_t_5;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position, self.sock) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.current_buffer_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.position); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v_self->__pyx_base.buffer);
__Pyx_GIVEREF(__pyx_v_self->__pyx_base.buffer);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_self->__pyx_base.buffer);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
__Pyx_INCREF(__pyx_v_self->sock);
__Pyx_GIVEREF(__pyx_v_self->sock);
PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_v_self->sock);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_3);
__pyx_t_3 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_3 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_v__dict = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_4 = (__pyx_v__dict != Py_None);
__pyx_t_5 = (__pyx_t_4 != 0);
if (__pyx_t_5) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v__dict);
__pyx_t_2 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_2));
__pyx_t_2 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = self.buffer is not None or self.sock is not None
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = self.buffer is not None or self.sock is not None # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, None), state
*/
/*else*/ {
__pyx_t_4 = (__pyx_v_self->__pyx_base.buffer != ((PyObject*)Py_None));
__pyx_t_6 = (__pyx_t_4 != 0);
if (!__pyx_t_6) {
} else {
__pyx_t_5 = __pyx_t_6;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_6 = (__pyx_v_self->sock != Py_None);
__pyx_t_4 = (__pyx_t_6 != 0);
__pyx_t_5 = __pyx_t_4;
__pyx_L4_bool_binop_done:;
__pyx_v_use_setstate = __pyx_t_5;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None or self.sock is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, None), state
* else:
*/
__pyx_t_5 = (__pyx_v_use_setstate != 0);
if (__pyx_t_5) {
/* "(tree fragment)":13
* use_setstate = self.buffer is not None or self.sock is not None
* if use_setstate:
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_BufferedSocketRea); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_251251440);
__Pyx_GIVEREF(__pyx_int_251251440);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_251251440);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_3, 2, Py_None);
__pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None or self.sock is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, None), state
* else:
* return __pyx_unpickle_BufferedSocketReader, (type(self), 0xef9caf0, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_BufferedSocketReader__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_pyx_unpickle_BufferedSocketRea); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_251251440);
__Pyx_GIVEREF(__pyx_int_251251440);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_251251440);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_state);
__pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3);
__pyx_t_1 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedSocketReader.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1087 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_24CompressedBufferedReader_4__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedreader_24CompressedBufferedReader_4__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedreader_CompressedBufferedReader * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_24CompressedBufferedReader_4__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedreader_CompressedBufferedReader *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
int __pyx_t_4;
int __pyx_t_5;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position, self.read_block) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.current_buffer_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.position); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v_self->__pyx_base.buffer);
__Pyx_GIVEREF(__pyx_v_self->__pyx_base.buffer);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_self->__pyx_base.buffer);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2);
__Pyx_INCREF(__pyx_v_self->read_block);
__Pyx_GIVEREF(__pyx_v_self->read_block);
PyTuple_SET_ITEM(__pyx_t_3, 3, __pyx_v_self->read_block);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_3);
__pyx_t_3 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.current_buffer_size, self.position, self.read_block)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_3 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_v__dict = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position, self.read_block)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_4 = (__pyx_v__dict != Py_None);
__pyx_t_5 = (__pyx_t_4 != 0);
if (__pyx_t_5) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v__dict);
__pyx_t_2 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_2));
__pyx_t_2 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = self.buffer is not None or self.read_block is not None
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.current_buffer_size, self.position, self.read_block)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = self.buffer is not None or self.read_block is not None # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, None), state
*/
/*else*/ {
__pyx_t_4 = (__pyx_v_self->__pyx_base.buffer != ((PyObject*)Py_None));
__pyx_t_6 = (__pyx_t_4 != 0);
if (!__pyx_t_6) {
} else {
__pyx_t_5 = __pyx_t_6;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_6 = (__pyx_v_self->read_block != Py_None);
__pyx_t_4 = (__pyx_t_6 != 0);
__pyx_t_5 = __pyx_t_4;
__pyx_L4_bool_binop_done:;
__pyx_v_use_setstate = __pyx_t_5;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None or self.read_block is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, None), state
* else:
*/
__pyx_t_5 = (__pyx_v_use_setstate != 0);
if (__pyx_t_5) {
/* "(tree fragment)":13
* use_setstate = self.buffer is not None or self.read_block is not None
* if use_setstate:
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_CompressedBuffere); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_25411819);
__Pyx_GIVEREF(__pyx_int_25411819);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_25411819);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_3, 2, Py_None);
__pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_3);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state);
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_1;
__pyx_t_1 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = self.buffer is not None or self.read_block is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, None), state
* else:
* return __pyx_unpickle_CompressedBufferedReader, (type(self), 0x183c0eb, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_CompressedBufferedReader__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_pyx_unpickle_CompressedBuffere); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_25411819);
__Pyx_GIVEREF(__pyx_int_25411819);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_25411819);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_v_state);
__pyx_t_2 = PyTuple_New(2); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_3);
__pyx_t_1 = 0;
__pyx_t_3 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.CompressedBufferedReader.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1087 | True | 1 |
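
The CWE-120 text carried in the record above describes one concrete pattern: bytes from an untrusted peer are copied into a fixed-size buffer using a peer-supplied length that is never compared against the destination's capacity. The short C sketch below illustrates exactly that classic pattern; the names (handle_response, local) are hypothetical and the code is not taken from clickhouse-driver.

/* Illustrative CWE-120 sketch only; hypothetical names, not clickhouse-driver code. */
#include <stdio.h>
#include <string.h>

static void handle_response(const unsigned char *payload, size_t claimed_len) {
    char local[64];
    /* BUG: claimed_len comes from the peer and is never checked against
     * sizeof local, so any claimed_len > 64 writes past the stack buffer. */
    memcpy(local, payload, claimed_len);
    if (claimed_len < sizeof local)
        local[claimed_len] = '\0';
    printf("copied %zu bytes into a %zu-byte buffer\n", claimed_len, sizeof local);
}

int main(void) {
    unsigned char payload[32] = "short, well-behaved payload";
    handle_response(payload, sizeof payload);   /* fine: 32 <= 64 */
    /* handle_response(payload, 4096); */       /* a lying length would overflow local[] */
    return 0;
}
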
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_2__pyx_unpickle_BufferedSocketReader | __pyx_pf_17clickhouse_driver_14bufferedreader_2__pyx_unpickle_BufferedSocketReader( CYTHON_UNUSED PyObject * __pyx_self , PyObject * __pyx_v___pyx_type , long __pyx_v___pyx_checksum , PyObject * __pyx_v___pyx_state) | ['__pyx_self', '__pyx_v___pyx_type', '__pyx_v___pyx_checksum', '__pyx_v___pyx_state'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_2__pyx_unpickle_BufferedSocketReader(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_v___pyx_PickleError = 0;
PyObject *__pyx_v___pyx_result = 0;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedSocketReader", 0);
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0xef9caf0: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
*/
__pyx_t_1 = ((__pyx_v___pyx_checksum != 0xef9caf0) != 0);
if (__pyx_t_1) {
/* "(tree fragment)":5
* cdef object __pyx_result
* if __pyx_checksum != 0xef9caf0:
* from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<<
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
* __pyx_result = BufferedSocketReader.__new__(__pyx_type)
*/
__pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_n_s_PickleError);
__Pyx_GIVEREF(__pyx_n_s_PickleError);
PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError);
__pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_t_2);
__pyx_v___pyx_PickleError = __pyx_t_2;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":6
* if __pyx_checksum != 0xef9caf0:
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum) # <<<<<<<<<<<<<<
* __pyx_result = BufferedSocketReader.__new__(__pyx_type)
* if __pyx_state is not None:
*/
__pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xef, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_INCREF(__pyx_v___pyx_PickleError);
__pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_Raise(__pyx_t_3, 0, 0, 0);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__PYX_ERR(1, 6, __pyx_L1_error)
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0xef9caf0: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
*/
}
/* "(tree fragment)":7
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
* __pyx_result = BufferedSocketReader.__new__(__pyx_type) # <<<<<<<<<<<<<<
* if __pyx_state is not None:
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state)
*/
__pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedreader_BufferedSocketReader), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_v___pyx_result = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
* __pyx_result = BufferedSocketReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
__pyx_t_1 = (__pyx_v___pyx_state != Py_None);
__pyx_t_6 = (__pyx_t_1 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":9
* __pyx_result = BufferedSocketReader.__new__(__pyx_type)
* if __pyx_state is not None:
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
* return __pyx_result
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state):
*/
if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
__pyx_t_3 = __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedSocketReader__set_state(((struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedSocketReader *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0xef9caf0 = (buffer, current_buffer_size, position, sock))" % __pyx_checksum)
* __pyx_result = BufferedSocketReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
}
/* "(tree fragment)":10
* if __pyx_state is not None:
* __pyx_unpickle_BufferedSocketReader__set_state(<BufferedSocketReader> __pyx_result, __pyx_state)
* return __pyx_result # <<<<<<<<<<<<<<
* cdef __pyx_unpickle_BufferedSocketReader__set_state(BufferedSocketReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_v___pyx_result);
__pyx_r = __pyx_v___pyx_result;
goto __pyx_L0;
/* "(tree fragment)":1
* def __pyx_unpickle_BufferedSocketReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_BufferedSocketReader", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v___pyx_PickleError);
__Pyx_XDECREF(__pyx_v___pyx_result);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 841 | True | 1 |
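
The commit message quoted in these records ("Fix malformed read/write in BufferedReader", with read_strings and read affected) points at the usual remediation for the pattern sketched earlier: validate every peer-supplied length against the space actually remaining in the destination before copying. The actual patch lives in the project's Cython sources; the C sketch below only illustrates that general bounded-copy discipline, with hypothetical names.

/* Illustrative bounded-copy sketch; hypothetical names, not the actual patch. */
#include <stdio.h>
#include <string.h>

#define DST_CAP 64

/* Append src to dst only if it fits into the remaining capacity. */
static int bounded_append(char *dst, size_t *used,
                          const unsigned char *src, size_t claimed_len) {
    if (claimed_len > DST_CAP - *used)   /* reject oversized peer-supplied lengths */
        return -1;
    memcpy(dst + *used, src, claimed_len);
    *used += claimed_len;
    return 0;
}

int main(void) {
    char dst[DST_CAP];
    size_t used = 0;
    unsigned char small[16] = "chunk";
    unsigned char big[128]  = {0};

    printf("small chunk: %s\n",
           bounded_append(dst, &used, small, sizeof small) == 0 ? "copied" : "rejected");
    printf("big chunk:   %s\n",
           bounded_append(dst, &used, big, sizeof big) == 0 ? "copied" : "rejected");
    return 0;
}
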
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader_4__pyx_unpickle_CompressedBufferedReader | __pyx_pf_17clickhouse_driver_14bufferedreader_4__pyx_unpickle_CompressedBufferedReader( CYTHON_UNUSED PyObject * __pyx_self , PyObject * __pyx_v___pyx_type , long __pyx_v___pyx_checksum , PyObject * __pyx_v___pyx_state) | ['__pyx_self', '__pyx_v___pyx_type', '__pyx_v___pyx_checksum', '__pyx_v___pyx_state'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader_4__pyx_unpickle_CompressedBufferedReader(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_v___pyx_PickleError = 0;
PyObject *__pyx_v___pyx_result = 0;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__pyx_unpickle_CompressedBufferedReader", 0);
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0x183c0eb: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
*/
__pyx_t_1 = ((__pyx_v___pyx_checksum != 0x183c0eb) != 0);
if (__pyx_t_1) {
/* "(tree fragment)":5
* cdef object __pyx_result
* if __pyx_checksum != 0x183c0eb:
* from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<<
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type)
*/
__pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_n_s_PickleError);
__Pyx_GIVEREF(__pyx_n_s_PickleError);
PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError);
__pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_t_2);
__pyx_v___pyx_PickleError = __pyx_t_2;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":6
* if __pyx_checksum != 0x183c0eb:
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum) # <<<<<<<<<<<<<<
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type)
* if __pyx_state is not None:
*/
__pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0x18, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_INCREF(__pyx_v___pyx_PickleError);
__pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_Raise(__pyx_t_3, 0, 0, 0);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__PYX_ERR(1, 6, __pyx_L1_error)
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0x183c0eb: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
*/
}
/* "(tree fragment)":7
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type) # <<<<<<<<<<<<<<
* if __pyx_state is not None:
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state)
*/
__pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedreader_CompressedBufferedReader), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_v___pyx_result = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
__pyx_t_1 = (__pyx_v___pyx_state != Py_None);
__pyx_t_6 = (__pyx_t_1 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":9
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type)
* if __pyx_state is not None:
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
* return __pyx_result
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state):
*/
if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
__pyx_t_3 = __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_CompressedBufferedReader__set_state(((struct __pyx_obj_17clickhouse_driver_14bufferedreader_CompressedBufferedReader *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x183c0eb = (buffer, current_buffer_size, position, read_block))" % __pyx_checksum)
* __pyx_result = CompressedBufferedReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
}
/* "(tree fragment)":10
* if __pyx_state is not None:
* __pyx_unpickle_CompressedBufferedReader__set_state(<CompressedBufferedReader> __pyx_result, __pyx_state)
* return __pyx_result # <<<<<<<<<<<<<<
* cdef __pyx_unpickle_CompressedBufferedReader__set_state(CompressedBufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.read_block = __pyx_state[3]
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_v___pyx_result);
__pyx_r = __pyx_v___pyx_result;
goto __pyx_L0;
/* "(tree fragment)":1
* def __pyx_unpickle_CompressedBufferedReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_CompressedBufferedReader", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v___pyx_PickleError);
__Pyx_XDECREF(__pyx_v___pyx_result);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 841 | True | 1 |
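
The __pyx_unpickle_BufferedSocketReader and __pyx_unpickle_CompressedBufferedReader helpers above share one defensive shape, visible in their embedded "(tree fragment)" comments: compare the incoming checksum against the expected layout constant (0xef9caf0, 0x183c0eb), raise PickleError on mismatch, and only then create the object and restore its state. The C sketch below illustrates that "check the layout tag before restoring state" idea; the struct, the constant value and all names are hypothetical.

/* Illustrative "validate the layout tag before restoring state" sketch;
 * hypothetical struct, constant and names. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define EXPECTED_TAG 0x0183c0ebu   /* stand-in for the generated checksum constant */

typedef struct {
    uint32_t tag;                  /* identifies the field layout below */
    int64_t  current_buffer_size;
    int64_t  position;
} reader_state;

/* Returns 0 on success, -1 if the blob is short or carries an incompatible tag. */
static int restore_state(const unsigned char *blob, size_t len, reader_state *out) {
    if (len < sizeof *out)
        return -1;
    memcpy(out, blob, sizeof *out);
    if (out->tag != EXPECTED_TAG)  /* refuse incompatible layouts, like PickleError above */
        return -1;
    return 0;
}

int main(void) {
    reader_state saved = { EXPECTED_TAG, 8192, 0 }, restored;
    unsigned char blob[sizeof saved];
    memcpy(blob, &saved, sizeof saved);
    if (restore_state(blob, sizeof blob, &restored) == 0)
        printf("restored: size=%lld position=%lld\n",
               (long long)restored.current_buffer_size,
               (long long)restored.position);
    return 0;
}
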
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader | __pyx_pf_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader( CYTHON_UNUSED PyObject * __pyx_self , PyObject * __pyx_v___pyx_type , long __pyx_v___pyx_checksum , PyObject * __pyx_v___pyx_state) | ['__pyx_self', '__pyx_v___pyx_type', '__pyx_v___pyx_checksum', '__pyx_v___pyx_state'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_v___pyx_PickleError = 0;
PyObject *__pyx_v___pyx_result = 0;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedReader", 0);
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0x2a8a945: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
*/
__pyx_t_1 = ((__pyx_v___pyx_checksum != 0x2a8a945) != 0);
if (__pyx_t_1) {
/* "(tree fragment)":5
* cdef object __pyx_result
* if __pyx_checksum != 0x2a8a945:
* from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<<
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
* __pyx_result = BufferedReader.__new__(__pyx_type)
*/
__pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_n_s_PickleError);
__Pyx_GIVEREF(__pyx_n_s_PickleError);
PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError);
__pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_t_2);
__pyx_v___pyx_PickleError = __pyx_t_2;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":6
* if __pyx_checksum != 0x2a8a945:
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum) # <<<<<<<<<<<<<<
* __pyx_result = BufferedReader.__new__(__pyx_type)
* if __pyx_state is not None:
*/
__pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0x2a, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_INCREF(__pyx_v___pyx_PickleError);
__pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_Raise(__pyx_t_3, 0, 0, 0);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__PYX_ERR(1, 6, __pyx_L1_error)
/* "(tree fragment)":4
* cdef object __pyx_PickleError
* cdef object __pyx_result
* if __pyx_checksum != 0x2a8a945: # <<<<<<<<<<<<<<
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
*/
}
/* "(tree fragment)":7
* from pickle import PickleError as __pyx_PickleError
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
* __pyx_result = BufferedReader.__new__(__pyx_type) # <<<<<<<<<<<<<<
* if __pyx_state is not None:
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
*/
__pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedreader_BufferedReader), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_2, function);
}
}
__pyx_t_3 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_v___pyx_result = __pyx_t_3;
__pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
* __pyx_result = BufferedReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
__pyx_t_1 = (__pyx_v___pyx_state != Py_None);
__pyx_t_6 = (__pyx_t_1 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":9
* __pyx_result = BufferedReader.__new__(__pyx_type)
* if __pyx_state is not None:
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state) # <<<<<<<<<<<<<<
* return __pyx_result
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state):
*/
if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error)
__pyx_t_3 = __pyx_f_17clickhouse_driver_14bufferedreader___pyx_unpickle_BufferedReader__set_state(((struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
/* "(tree fragment)":8
* raise __pyx_PickleError("Incompatible checksums (%s vs 0x2a8a945 = (buffer, current_buffer_size, position))" % __pyx_checksum)
* __pyx_result = BufferedReader.__new__(__pyx_type)
* if __pyx_state is not None: # <<<<<<<<<<<<<<
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
*/
}
/* "(tree fragment)":10
* if __pyx_state is not None:
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
* return __pyx_result # <<<<<<<<<<<<<<
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_v___pyx_result);
__pyx_r = __pyx_v___pyx_result;
goto __pyx_L0;
/* "(tree fragment)":1
* def __pyx_unpickle_BufferedReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.__pyx_unpickle_BufferedReader", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v___pyx_PickleError);
__Pyx_XDECREF(__pyx_v___pyx_result);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 841 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_5read | __pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_5read( PyObject * __pyx_v_self , PyObject * __pyx_arg_unread) | ['__pyx_v_self', '__pyx_arg_unread'] | static PyObject *__pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_5read(PyObject *__pyx_v_self, PyObject *__pyx_arg_unread) {
Py_ssize_t __pyx_v_unread;
PyObject *__pyx_r = 0;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("read (wrapper)", 0);
assert(__pyx_arg_unread); {
__pyx_v_unread = __Pyx_PyIndex_AsSsize_t(__pyx_arg_unread); if (unlikely((__pyx_v_unread == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 25, __pyx_L3_error)
}
goto __pyx_L4_argument_unpacking_done;
__pyx_L3_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read", __pyx_clineno, __pyx_lineno, __pyx_filename);
__Pyx_RefNannyFinishContext();
return NULL;
__pyx_L4_argument_unpacking_done:;
__pyx_r = __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_4read(((struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *)__pyx_v_self), ((Py_ssize_t)__pyx_v_unread));
/* function exit code */
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 125 | True | 1 |
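
The wrapper above only converts the requested byte count to Py_ssize_t before dispatching; what matters for this CVE is how a buffered read loop handles counts and chunk sizes that are ultimately derived from a crafted server response. The C sketch below is a hedged illustration of a bounds-checked buffered read: it borrows the (buffer, current_buffer_size, position) field names that appear in these records, but everything else is hypothetical and it is not the driver's implementation.

/* Hedged sketch of a bounds-checked buffered read; hypothetical code,
 * only the field names echo the state tuple in these records. */
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned char buffer[4096];
    size_t current_buffer_size;   /* bytes currently held in buffer */
    size_t position;              /* read cursor inside buffer */
} buffered_reader;

/* Copy up to want bytes into out (capacity out_cap); never read past
 * current_buffer_size and never write past out_cap. Returns bytes copied. */
static size_t buffered_read(buffered_reader *r, unsigned char *out,
                            size_t out_cap, size_t want) {
    size_t avail = r->current_buffer_size - r->position;
    size_t n = want;
    if (n > avail)   n = avail;    /* clamp to what the buffer really holds */
    if (n > out_cap) n = out_cap;  /* clamp to the caller's destination */
    memcpy(out, r->buffer + r->position, n);
    r->position += n;
    return n;
}

int main(void) {
    buffered_reader r = { "server response bytes", 21, 0 };
    unsigned char out[8];
    size_t n = buffered_read(&r, out, sizeof out, 1000);  /* oversized request */
    printf("copied %zu bytes despite a request for 1000\n", n);
    return 0;
}
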
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_9read_strings | __pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_9read_strings( PyObject * __pyx_v_self , PyObject * __pyx_args , PyObject * __pyx_kwds) | ['__pyx_v_self', '__pyx_args', '__pyx_kwds'] | static PyObject *__pyx_pw_17clickhouse_driver_14bufferedreader_14BufferedReader_9read_strings(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
Py_ssize_t __pyx_v_n_items;
PyObject *__pyx_v_encoding = 0;
PyObject *__pyx_r = 0;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("read_strings (wrapper)", 0);
{
static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_n_items,&__pyx_n_s_encoding,0};
PyObject* values[2] = {0,0};
values[1] = ((PyObject *)Py_None);
if (unlikely(__pyx_kwds)) {
Py_ssize_t kw_args;
const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
switch (pos_args) {
case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
CYTHON_FALLTHROUGH;
case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
CYTHON_FALLTHROUGH;
case 0: break;
default: goto __pyx_L5_argtuple_error;
}
kw_args = PyDict_Size(__pyx_kwds);
switch (pos_args) {
case 0:
if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_n_items)) != 0)) kw_args--;
else goto __pyx_L5_argtuple_error;
CYTHON_FALLTHROUGH;
case 1:
if (kw_args > 0) {
PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_encoding);
if (value) { values[1] = value; kw_args--; }
}
}
if (unlikely(kw_args > 0)) {
if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "read_strings") < 0)) __PYX_ERR(0, 62, __pyx_L3_error)
}
} else {
switch (PyTuple_GET_SIZE(__pyx_args)) {
case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
CYTHON_FALLTHROUGH;
case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
break;
default: goto __pyx_L5_argtuple_error;
}
}
__pyx_v_n_items = __Pyx_PyIndex_AsSsize_t(values[0]); if (unlikely((__pyx_v_n_items == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 62, __pyx_L3_error)
__pyx_v_encoding = values[1];
}
goto __pyx_L4_argument_unpacking_done;
__pyx_L5_argtuple_error:;
__Pyx_RaiseArgtupleInvalid("read_strings", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 62, __pyx_L3_error)
__pyx_L3_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedreader.BufferedReader.read_strings", __pyx_clineno, __pyx_lineno, __pyx_filename);
__Pyx_RefNannyFinishContext();
return NULL;
__pyx_L4_argument_unpacking_done:;
__pyx_r = __pyx_pf_17clickhouse_driver_14bufferedreader_14BufferedReader_8read_strings(((struct __pyx_obj_17clickhouse_driver_14bufferedreader_BufferedReader *)__pyx_v_self), __pyx_v_n_items, __pyx_v_encoding);
/* function exit code */
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 454 | True | 1 |
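Editorial aside on the record above: it captures the generated argument-parsing wrapper for BufferedReader.read_strings, one of the two methods the commit message names as affected. The sketch below is not the project's code; it is a minimal, self-contained C illustration (hypothetical names, fixed 16-byte destination) of the CWE-120 pattern the record's weakness fields describe — copying a peer-announced number of bytes into a fixed buffer without checking — alongside the clamped variant that bounds the copy by the destination size.

#include <stdio.h>
#include <string.h>

#define BUFFER_SIZE 16

/* Classic CWE-120: the copy length comes from untrusted input (here, a
 * peer-announced string length) and is never checked against the size
 * of the destination buffer. */
static void copy_unchecked(char *dst, const char *src, size_t announced_len) {
    memcpy(dst, src, announced_len);   /* overflows dst whenever announced_len > BUFFER_SIZE */
}

/* Bounds-checked variant: clamp the copy to the space actually available
 * in the destination buffer and report how much was accepted. */
static size_t copy_checked(char *dst, size_t dst_size, const char *src, size_t announced_len) {
    size_t n = announced_len < dst_size ? announced_len : dst_size;
    memcpy(dst, src, n);
    return n;   /* the caller decides how to handle any remaining bytes */
}

int main(void) {
    char payload[64];
    memset(payload, 'A', sizeof(payload));

    char dst[BUFFER_SIZE];
    size_t copied = copy_checked(dst, sizeof(dst), payload, sizeof(payload));
    printf("copied %zu of %zu announced bytes\n", copied, sizeof(payload));
    /* copy_unchecked(dst, payload, sizeof(payload)); is left uncalled:
     * invoking it here would be the overflow itself (undefined behaviour). */
    return 0;
}

Running the sketch prints how many of the announced bytes were actually accepted; the unchecked function is shown only to make the missing bound visible.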
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pymod_exec_bufferedreader | __pyx_pymod_exec_bufferedreader( PyObject * __pyx_pyinit_module) | ['__pyx_pyinit_module'] | static CYTHON_SMALL_CODE int __pyx_pymod_exec_bufferedreader(PyObject *__pyx_pyinit_module)
#endif
#endif
{
PyObject *__pyx_t_1 = NULL;
__Pyx_RefNannyDeclarations
#if CYTHON_PEP489_MULTI_PHASE_INIT
if (__pyx_m) {
if (__pyx_m == __pyx_pyinit_module) return 0;
PyErr_SetString(PyExc_RuntimeError, "Module 'bufferedreader' has already been imported. Re-initialisation is not supported.");
return -1;
}
#elif PY_MAJOR_VERSION >= 3
if (__pyx_m) return __Pyx_NewRef(__pyx_m);
#endif
#if CYTHON_REFNANNY
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
if (!__Pyx_RefNanny) {
PyErr_Clear();
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
if (!__Pyx_RefNanny)
Py_FatalError("failed to import 'refnanny' module");
}
#endif
__Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_bufferedreader(void)", 0);
if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pxy_PyFrame_Initialize_Offsets
__Pxy_PyFrame_Initialize_Offsets();
#endif
__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pyx_CyFunction_USED
if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_FusedFunction_USED
if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Coroutine_USED
if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Generator_USED
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_AsyncGen_USED
if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_StopAsyncIteration_USED
if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/*--- Library function declarations ---*/
/*--- Threads initialization code ---*/
#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
#ifdef WITH_THREAD /* Python build with threading support? */
PyEval_InitThreads();
#endif
#endif
/*--- Module creation code ---*/
#if CYTHON_PEP489_MULTI_PHASE_INIT
__pyx_m = __pyx_pyinit_module;
Py_INCREF(__pyx_m);
#else
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("bufferedreader", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_d);
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_b);
__pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_cython_runtime);
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
/*--- Initialize various global constants etc. ---*/
if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
if (__pyx_module_is_main_clickhouse_driver__bufferedreader) {
if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
}
#if PY_MAJOR_VERSION >= 3
{
PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
if (!PyDict_GetItemString(modules, "clickhouse_driver.bufferedreader")) {
if (unlikely(PyDict_SetItemString(modules, "clickhouse_driver.bufferedreader", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
}
}
#endif
/*--- Builtin init code ---*/
if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;
/*--- Constants init code ---*/
if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;
/*--- Global type/function init code ---*/
(void)__Pyx_modinit_global_init_code();
(void)__Pyx_modinit_variable_export_code();
(void)__Pyx_modinit_function_export_code();
if (unlikely(__Pyx_modinit_type_init_code() != 0)) goto __pyx_L1_error;
if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;
(void)__Pyx_modinit_variable_import_code();
(void)__Pyx_modinit_function_import_code();
/*--- Execution code ---*/
#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/* "(tree fragment)":1
* def __pyx_unpickle_BufferedReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedreader_1__pyx_unpickle_BufferedReader, NULL, __pyx_n_s_clickhouse_driver_bufferedreader); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_BufferedReader, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":11
* __pyx_unpickle_BufferedReader__set_state(<BufferedReader> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedReader__set_state(BufferedReader __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.current_buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedreader_3__pyx_unpickle_BufferedSocketReader, NULL, __pyx_n_s_clickhouse_driver_bufferedreader); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_BufferedSocketRea, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":1
* def __pyx_unpickle_CompressedBufferedReader(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedreader_5__pyx_unpickle_CompressedBufferedReader, NULL, __pyx_n_s_clickhouse_driver_bufferedreader); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_CompressedBuffere, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedreader.pyx":1
* from cpython cimport Py_INCREF, PyBytes_FromStringAndSize # <<<<<<<<<<<<<<
* from cpython.bytearray cimport PyByteArray_AsString
* # Using python's versions of pure c memory management functions for
*/
__pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/*--- Wrapped vars code ---*/
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
if (__pyx_m) {
if (__pyx_d) {
__Pyx_AddTraceback("init clickhouse_driver.bufferedreader", __pyx_clineno, __pyx_lineno, __pyx_filename);
}
Py_CLEAR(__pyx_m);
} else if (!PyErr_Occurred()) {
PyErr_SetString(PyExc_ImportError, "init clickhouse_driver.bufferedreader");
}
__pyx_L0:;
__Pyx_RefNannyFinishContext();
#if CYTHON_PEP489_MULTI_PHASE_INIT
return (__pyx_m != NULL) ? 0 : -1;
#elif PY_MAJOR_VERSION >= 3
return __pyx_m;
#else
return;
#endif
} | 998 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedwriter_14BufferedWriter_write | __pyx_f_17clickhouse_driver_14bufferedwriter_14BufferedWriter_write( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter * __pyx_v_self , PyObject * __pyx_v_data , int __pyx_skip_dispatch) | ['__pyx_v_self', '__pyx_v_data', '__pyx_skip_dispatch'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedwriter_14BufferedWriter_write(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter *__pyx_v_self, PyObject *__pyx_v_data, int __pyx_skip_dispatch) {
Py_ssize_t __pyx_v_written;
Py_ssize_t __pyx_v_size;
Py_ssize_t __pyx_v_data_len;
char *__pyx_v_c_data;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
Py_ssize_t __pyx_t_5;
char *__pyx_t_6;
int __pyx_t_7;
Py_ssize_t __pyx_t_8;
Py_ssize_t __pyx_t_9;
__Pyx_RefNannySetupContext("write", 0);
/* Check if called by wrapper */
if (unlikely(__pyx_skip_dispatch)) ;
/* Check if overridden in Python */
else if (unlikely((Py_TYPE(((PyObject *)__pyx_v_self))->tp_dictoffset != 0) || (Py_TYPE(((PyObject *)__pyx_v_self))->tp_flags & (Py_TPFLAGS_IS_ABSTRACT | Py_TPFLAGS_HEAPTYPE)))) {
#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS
static PY_UINT64_T __pyx_tp_dict_version = __PYX_DICT_VERSION_INIT, __pyx_obj_dict_version = __PYX_DICT_VERSION_INIT;
if (unlikely(!__Pyx_object_dict_version_matches(((PyObject *)__pyx_v_self), __pyx_tp_dict_version, __pyx_obj_dict_version))) {
PY_UINT64_T __pyx_type_dict_guard = __Pyx_get_tp_dict_version(((PyObject *)__pyx_v_self));
#endif
__pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_write); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 28, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (!PyCFunction_Check(__pyx_t_1) || (PyCFunction_GET_FUNCTION(__pyx_t_1) != (PyCFunction)(void*)__pyx_pw_17clickhouse_driver_14bufferedwriter_14BufferedWriter_7write)) {
__Pyx_XDECREF(__pyx_r);
__Pyx_INCREF(__pyx_t_1);
__pyx_t_3 = __pyx_t_1; __pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_4, __pyx_v_data) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_v_data);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 28, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
goto __pyx_L0;
}
#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS
__pyx_tp_dict_version = __Pyx_get_tp_dict_version(((PyObject *)__pyx_v_self));
__pyx_obj_dict_version = __Pyx_get_object_dict_version(((PyObject *)__pyx_v_self));
if (unlikely(__pyx_type_dict_guard != __pyx_tp_dict_version)) {
__pyx_tp_dict_version = __pyx_obj_dict_version = __PYX_DICT_VERSION_INIT;
}
#endif
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_PYTYPE_LOOKUP && CYTHON_USE_TYPE_SLOTS
}
#endif
}
/* "clickhouse_driver/bufferedwriter.pyx":29
*
* cpdef write(self, data):
* cdef Py_ssize_t written = 0 # <<<<<<<<<<<<<<
* cdef Py_ssize_t to_write, size
* cdef Py_ssize_t data_len = len(data)
*/
__pyx_v_written = 0;
/* "clickhouse_driver/bufferedwriter.pyx":31
* cdef Py_ssize_t written = 0
* cdef Py_ssize_t to_write, size
* cdef Py_ssize_t data_len = len(data) # <<<<<<<<<<<<<<
* cdef char* c_data
*
*/
__pyx_t_5 = PyObject_Length(__pyx_v_data); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(0, 31, __pyx_L1_error)
__pyx_v_data_len = __pyx_t_5;
/* "clickhouse_driver/bufferedwriter.pyx":34
* cdef char* c_data
*
* c_data = PyBytes_AsString(data) # <<<<<<<<<<<<<<
*
* while written < data_len:
*/
__pyx_t_6 = PyBytes_AsString(__pyx_v_data); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(0, 34, __pyx_L1_error)
__pyx_v_c_data = __pyx_t_6;
/* "clickhouse_driver/bufferedwriter.pyx":36
* c_data = PyBytes_AsString(data)
*
* while written < data_len: # <<<<<<<<<<<<<<
* size = min(data_len - written, self.buffer_size - self.position)
* memcpy(&self.buffer[self.position], &c_data[written], size)
*/
while (1) {
__pyx_t_7 = ((__pyx_v_written < __pyx_v_data_len) != 0);
if (!__pyx_t_7) break;
/* "clickhouse_driver/bufferedwriter.pyx":37
*
* while written < data_len:
* size = min(data_len - written, self.buffer_size - self.position) # <<<<<<<<<<<<<<
* memcpy(&self.buffer[self.position], &c_data[written], size)
*
*/
__pyx_t_5 = (__pyx_v_self->buffer_size - __pyx_v_self->position);
__pyx_t_8 = (__pyx_v_data_len - __pyx_v_written);
if (((__pyx_t_5 < __pyx_t_8) != 0)) {
__pyx_t_9 = __pyx_t_5;
} else {
__pyx_t_9 = __pyx_t_8;
}
__pyx_v_size = __pyx_t_9;
/* "clickhouse_driver/bufferedwriter.pyx":38
* while written < data_len:
* size = min(data_len - written, self.buffer_size - self.position)
* memcpy(&self.buffer[self.position], &c_data[written], size) # <<<<<<<<<<<<<<
*
* if self.position == self.buffer_size:
*/
(void)(memcpy((&(__pyx_v_self->buffer[__pyx_v_self->position])), (&(__pyx_v_c_data[__pyx_v_written])), __pyx_v_size));
/* "clickhouse_driver/bufferedwriter.pyx":40
* memcpy(&self.buffer[self.position], &c_data[written], size)
*
* if self.position == self.buffer_size: # <<<<<<<<<<<<<<
* self.write_into_stream()
*
*/
__pyx_t_7 = ((__pyx_v_self->position == __pyx_v_self->buffer_size) != 0);
if (__pyx_t_7) {
/* "clickhouse_driver/bufferedwriter.pyx":41
*
* if self.position == self.buffer_size:
* self.write_into_stream() # <<<<<<<<<<<<<<
*
* self.position += size
*/
__pyx_t_1 = ((struct __pyx_vtabstruct_17clickhouse_driver_14bufferedwriter_BufferedWriter *)__pyx_v_self->__pyx_vtab)->write_into_stream(__pyx_v_self, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 41, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/bufferedwriter.pyx":40
* memcpy(&self.buffer[self.position], &c_data[written], size)
*
* if self.position == self.buffer_size: # <<<<<<<<<<<<<<
* self.write_into_stream()
*
*/
}
/* "clickhouse_driver/bufferedwriter.pyx":43
* self.write_into_stream()
*
* self.position += size # <<<<<<<<<<<<<<
* written += size
*
*/
__pyx_v_self->position = (__pyx_v_self->position + __pyx_v_size);
/* "clickhouse_driver/bufferedwriter.pyx":44
*
* self.position += size
* written += size # <<<<<<<<<<<<<<
*
* def flush(self):
*/
__pyx_v_written = (__pyx_v_written + __pyx_v_size);
}
/* "clickhouse_driver/bufferedwriter.pyx":28
* raise NotImplementedError
*
* cpdef write(self, data): # <<<<<<<<<<<<<<
* cdef Py_ssize_t written = 0
* cdef Py_ssize_t to_write, size
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.BufferedWriter.write", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 790 | True | 1 |
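The generated BufferedWriter.write in the record above already shows the bounded pattern on the write side: each memcpy is limited by min(data_len - written, buffer_size - position) and the buffer is flushed when full. The C sketch below restates that loop as a small, hedged illustration — the struct, names, and flush-to-stdout stand-in for the socket are assumptions, it is not the driver's implementation, and it flushes after advancing the position rather than reproducing the generated code's exact ordering.

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 8

/* Hypothetical buffered writer: data is copied into a fixed buffer in
 * chunks no larger than the free space, flushing whenever the buffer fills. */
struct writer {
    char   buffer[BUF_SIZE];
    size_t position;
};

static void flush_buffer(struct writer *w) {
    fwrite(w->buffer, 1, w->position, stdout);   /* stands in for send() */
    w->position = 0;
}

static void buffered_write(struct writer *w, const char *data, size_t len) {
    size_t written = 0;
    while (written < len) {
        size_t space = BUF_SIZE - w->position;
        size_t chunk = len - written < space ? len - written : space;
        memcpy(w->buffer + w->position, data + written, chunk);
        w->position += chunk;
        written     += chunk;
        if (w->position == BUF_SIZE)
            flush_buffer(w);
    }
}

int main(void) {
    struct writer w = { .position = 0 };
    const char msg[] = "hello, buffered world\n";
    buffered_write(&w, msg, sizeof(msg) - 1);
    flush_buffer(&w);   /* drain whatever is left in the buffer */
    return 0;
}

Because every chunk is bounded by both the remaining input and the remaining buffer space, no single memcpy can write past the fixed buffer.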
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedSocketWriter__set_state | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedSocketWriter__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedSocketWriter * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedSocketWriter__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedSocketWriter *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
char *__pyx_t_2;
Py_ssize_t __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
int __pyx_t_6;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
PyObject *__pyx_t_9 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedSocketWriter__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_BufferedSocketWriter__set_state(BufferedSocketWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_t_1); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__pyx_v___pyx_result->__pyx_base.buffer = __pyx_t_2;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.buffer_size = __pyx_t_3;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.position = __pyx_t_3;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->sock);
__Pyx_DECREF(__pyx_v___pyx_result->sock);
__pyx_v___pyx_result->sock = __pyx_t_1;
__pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedSocketWriter__set_state(BufferedSocketWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = ((__pyx_t_3 > 4) != 0);
if (__pyx_t_5) {
} else {
__pyx_t_4 = __pyx_t_5;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_5 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_6 = (__pyx_t_5 != 0);
__pyx_t_4 = __pyx_t_6;
__pyx_L4_bool_binop_done:;
if (__pyx_t_4) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4]) # <<<<<<<<<<<<<<
*/
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_update); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_8);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_7 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 4, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_9 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
__pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
if (likely(__pyx_t_9)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
__Pyx_INCREF(__pyx_t_9);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_8, function);
}
}
__pyx_t_1 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_9, __pyx_t_7) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_7);
__Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedSocketWriter__set_state(BufferedSocketWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_BufferedSocketWriter__set_state(<BufferedSocketWriter> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedSocketWriter__set_state(BufferedSocketWriter __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]; __pyx_result.sock = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_XDECREF(__pyx_t_9);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.__pyx_unpickle_BufferedSocketWriter__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1006 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedWriter__set_state | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedWriter__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_BufferedWriter__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
char *__pyx_t_2;
Py_ssize_t __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
int __pyx_t_6;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
PyObject *__pyx_t_9 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_BufferedWriter__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_BufferedWriter__set_state(BufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[3])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_t_1); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__pyx_v___pyx_result->buffer = __pyx_t_2;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->buffer_size = __pyx_t_3;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->position = __pyx_t_3;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedWriter__set_state(BufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[3])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = ((__pyx_t_3 > 3) != 0);
if (__pyx_t_5) {
} else {
__pyx_t_4 = __pyx_t_5;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_5 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_6 = (__pyx_t_5 != 0);
__pyx_t_4 = __pyx_t_6;
__pyx_L4_bool_binop_done:;
if (__pyx_t_4) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[3]) # <<<<<<<<<<<<<<
*/
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_update); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_8);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_7 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_9 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
__pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
if (likely(__pyx_t_9)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
__Pyx_INCREF(__pyx_t_9);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_8, function);
}
}
__pyx_t_1 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_9, __pyx_t_7) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_7);
__Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_BufferedWriter__set_state(BufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[3])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_BufferedWriter__set_state(<BufferedWriter> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedWriter__set_state(BufferedWriter __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_XDECREF(__pyx_t_9);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.__pyx_unpickle_BufferedWriter__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 903 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_CompressedBufferedWriter__set_state | __pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_CompressedBufferedWriter__set_state( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_CompressedBufferedWriter * __pyx_v___pyx_result , PyObject * __pyx_v___pyx_state) | ['__pyx_v___pyx_result', '__pyx_v___pyx_state'] | static PyObject *__pyx_f_17clickhouse_driver_14bufferedwriter___pyx_unpickle_CompressedBufferedWriter__set_state(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_CompressedBufferedWriter *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
char *__pyx_t_2;
Py_ssize_t __pyx_t_3;
int __pyx_t_4;
int __pyx_t_5;
int __pyx_t_6;
PyObject *__pyx_t_7 = NULL;
PyObject *__pyx_t_8 = NULL;
PyObject *__pyx_t_9 = NULL;
__Pyx_RefNannySetupContext("__pyx_unpickle_CompressedBufferedWriter__set_state", 0);
/* "(tree fragment)":12
* return __pyx_result
* cdef __pyx_unpickle_CompressedBufferedWriter__set_state(CompressedBufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.compressor = __pyx_state[2]; __pyx_result.position = __pyx_state[3] # <<<<<<<<<<<<<<
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyObject_AsWritableString(__pyx_t_1); if (unlikely((!__pyx_t_2) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__pyx_v___pyx_result->__pyx_base.buffer = __pyx_t_2;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.buffer_size = __pyx_t_3;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 2, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_1);
__Pyx_GOTREF(__pyx_v___pyx_result->compressor);
__Pyx_DECREF(__pyx_v___pyx_result->compressor);
__pyx_v___pyx_result->compressor = __pyx_t_1;
__pyx_t_1 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 12, __pyx_L1_error)
}
__pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 3, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_1); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v___pyx_result->__pyx_base.position = __pyx_t_3;
/* "(tree fragment)":13
* cdef __pyx_unpickle_CompressedBufferedWriter__set_state(CompressedBufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.compressor = __pyx_state[2]; __pyx_result.position = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()");
__PYX_ERR(1, 13, __pyx_L1_error)
}
__pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_5 = ((__pyx_t_3 > 4) != 0);
if (__pyx_t_5) {
} else {
__pyx_t_4 = __pyx_t_5;
goto __pyx_L4_bool_binop_done;
}
__pyx_t_5 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_5 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error)
__pyx_t_6 = (__pyx_t_5 != 0);
__pyx_t_4 = __pyx_t_6;
__pyx_L4_bool_binop_done:;
if (__pyx_t_4) {
/* "(tree fragment)":14
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.compressor = __pyx_state[2]; __pyx_result.position = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
* __pyx_result.__dict__.update(__pyx_state[4]) # <<<<<<<<<<<<<<
*/
__pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_update); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_8);
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(__pyx_v___pyx_state == Py_None)) {
PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
__PYX_ERR(1, 14, __pyx_L1_error)
}
__pyx_t_7 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 4, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_7);
__pyx_t_9 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
__pyx_t_9 = PyMethod_GET_SELF(__pyx_t_8);
if (likely(__pyx_t_9)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
__Pyx_INCREF(__pyx_t_9);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_8, function);
}
}
__pyx_t_1 = (__pyx_t_9) ? __Pyx_PyObject_Call2Args(__pyx_t_8, __pyx_t_9, __pyx_t_7) : __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_7);
__Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0;
__Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "(tree fragment)":13
* cdef __pyx_unpickle_CompressedBufferedWriter__set_state(CompressedBufferedWriter __pyx_result, tuple __pyx_state):
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.compressor = __pyx_state[2]; __pyx_result.position = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<<
* __pyx_result.__dict__.update(__pyx_state[4])
*/
}
/* "(tree fragment)":11
* __pyx_unpickle_CompressedBufferedWriter__set_state(<CompressedBufferedWriter> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_CompressedBufferedWriter__set_state(CompressedBufferedWriter __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.compressor = __pyx_state[2]; __pyx_result.position = __pyx_state[3]
* if len(__pyx_state) > 4 and hasattr(__pyx_result, '__dict__'):
*/
/* function exit code */
__pyx_r = Py_None; __Pyx_INCREF(Py_None);
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_7);
__Pyx_XDECREF(__pyx_t_8);
__Pyx_XDECREF(__pyx_t_9);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.__pyx_unpickle_CompressedBufferedWriter__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1006 | True | 1 |
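For the read side — the path the CVE description says a crafted server response can overflow — the hedged C sketch below shows the bound that has to hold in a buffered reader: every chunk copied out of the internal buffer is clamped to both the bytes still wanted and the bytes actually buffered. All names, sizes, and the in-memory "source" standing in for the socket are illustrative assumptions, not the driver's code.

#include <stdio.h>
#include <string.h>

#define INTERNAL_SIZE 8

/* Hypothetical buffered reader serving caller-requested byte counts out of
 * a small internal buffer that is refilled from a source in bounded steps. */
struct reader {
    const char *source;      /* stands in for the socket */
    size_t      source_len;
    size_t      source_pos;
    char        internal[INTERNAL_SIZE];
    size_t      buffered;    /* valid bytes currently in internal */
    size_t      position;    /* read offset inside internal */
};

static void refill(struct reader *r) {
    size_t n = r->source_len - r->source_pos;
    if (n > INTERNAL_SIZE)
        n = INTERNAL_SIZE;                     /* never read past the internal buffer */
    memcpy(r->internal, r->source + r->source_pos, n);
    r->source_pos += n;
    r->buffered = n;
    r->position = 0;
}

static size_t buffered_read(struct reader *r, char *dst, size_t wanted) {
    size_t done = 0;
    while (done < wanted) {
        if (r->position == r->buffered) {
            refill(r);
            if (r->buffered == 0)
                break;                         /* source exhausted */
        }
        size_t avail = r->buffered - r->position;
        size_t chunk = wanted - done < avail ? wanted - done : avail;
        memcpy(dst + done, r->internal + r->position, chunk);
        r->position += chunk;
        done        += chunk;
    }
    return done;
}

int main(void) {
    const char data[] = "0123456789abcdefghij";
    struct reader r = { data, sizeof(data) - 1, 0, {0}, 0, 0 };
    char out[32] = {0};
    size_t n = buffered_read(&r, out, 13);
    printf("read %zu bytes: %s\n", n, out);
    return 0;
}

Dropping either half of the chunk clamp is exactly the kind of missing bound that lets a hostile peer's announced length run a memcpy past the buffer, which is the failure mode these records document.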
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter_12__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter_12__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter_12__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
int __pyx_t_5;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.position) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->buffer); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->buffer_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyInt_FromSsize_t(__pyx_v_self->position); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_4);
__pyx_t_4 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.position)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_4 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_v__dict = __pyx_t_4;
__pyx_t_4 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_5 = (__pyx_v__dict != Py_None);
__pyx_t_6 = (__pyx_t_5 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v__dict);
__pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_3));
__pyx_t_3 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = False
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = False # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, None), state
*/
/*else*/ {
__pyx_v_use_setstate = 0;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = False
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, None), state
* else:
*/
__pyx_t_6 = (__pyx_v_use_setstate != 0);
if (__pyx_t_6) {
/* "(tree fragment)":13
* use_setstate = False
* if use_setstate:
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_pyx_unpickle_BufferedWriter); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_39656716);
__Pyx_GIVEREF(__pyx_int_39656716);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_39656716);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_4, 2, Py_None);
__pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_state);
__pyx_t_3 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = False
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, None), state
* else:
* return __pyx_unpickle_BufferedWriter, (type(self), 0x25d1d0c, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_BufferedWriter__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_BufferedWriter); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_39656716);
__Pyx_GIVEREF(__pyx_int_39656716);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_39656716);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_state);
__pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4);
__pyx_t_2 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_3;
__pyx_t_3 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.BufferedWriter.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1015 | True | 1 |
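
Editorial note: the CWE-120 text carried by this record describes an unchecked copy into a fixed-size buffer. The C sketch below is purely illustrative and is not code from clickhouse-driver; it contrasts a copy that trusts a peer-reported length with one that checks the destination capacity first, which is the class of defect the CVE description attributes to crafted server responses.

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* Unsafe: trusts the length reported by the peer (the CWE-120 pattern). */
static void copy_unchecked(char *dst, const char *src, size_t reported_len) {
    memcpy(dst, src, reported_len);   /* overflows dst whenever reported_len > BUF_SIZE */
}

/* Safe: rejects input that does not fit the destination. */
static int copy_checked(char *dst, size_t dst_size, const char *src, size_t reported_len) {
    if (reported_len > dst_size) {
        return -1;                     /* refuse instead of overflowing */
    }
    memcpy(dst, src, reported_len);
    return 0;
}

int main(void) {
    char buffer[BUF_SIZE];
    const char payload[] = "response bytes from an untrusted server";

    (void)copy_unchecked;              /* defined only for contrast; calling it here would corrupt the stack */
    if (copy_checked(buffer, sizeof(buffer), payload, sizeof(payload)) != 0) {
        fprintf(stderr, "rejected %zu bytes for a %d-byte buffer\n",
                sizeof(payload), BUF_SIZE);
    }
    return 0;
}
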
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter___init__ | __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter___init__( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter * __pyx_v_self , Py_ssize_t __pyx_v_bufsize) | ['__pyx_v_self', '__pyx_v_bufsize'] | static int __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter___init__(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter *__pyx_v_self, Py_ssize_t __pyx_v_bufsize) {
int __pyx_r;
__Pyx_RefNannyDeclarations
int __pyx_t_1;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
__Pyx_RefNannySetupContext("__init__", 0);
/* "clickhouse_driver/bufferedwriter.pyx":13
*
* def __init__(self, Py_ssize_t bufsize):
* self.buffer = <char *> PyMem_Malloc(bufsize) # <<<<<<<<<<<<<<
* if not self.buffer:
* raise MemoryError()
*/
__pyx_v_self->buffer = ((char *)PyMem_Malloc(__pyx_v_bufsize));
/* "clickhouse_driver/bufferedwriter.pyx":14
* def __init__(self, Py_ssize_t bufsize):
* self.buffer = <char *> PyMem_Malloc(bufsize)
* if not self.buffer: # <<<<<<<<<<<<<<
* raise MemoryError()
*
*/
__pyx_t_1 = ((!(__pyx_v_self->buffer != 0)) != 0);
if (unlikely(__pyx_t_1)) {
/* "clickhouse_driver/bufferedwriter.pyx":15
* self.buffer = <char *> PyMem_Malloc(bufsize)
* if not self.buffer:
* raise MemoryError() # <<<<<<<<<<<<<<
*
* self.position = 0
*/
PyErr_NoMemory(); __PYX_ERR(0, 15, __pyx_L1_error)
/* "clickhouse_driver/bufferedwriter.pyx":14
* def __init__(self, Py_ssize_t bufsize):
* self.buffer = <char *> PyMem_Malloc(bufsize)
* if not self.buffer: # <<<<<<<<<<<<<<
* raise MemoryError()
*
*/
}
/* "clickhouse_driver/bufferedwriter.pyx":17
* raise MemoryError()
*
* self.position = 0 # <<<<<<<<<<<<<<
* self.buffer_size = bufsize
*
*/
__pyx_v_self->position = 0;
/* "clickhouse_driver/bufferedwriter.pyx":18
*
* self.position = 0
* self.buffer_size = bufsize # <<<<<<<<<<<<<<
*
* super(BufferedWriter, self).__init__()
*/
__pyx_v_self->buffer_size = __pyx_v_bufsize;
/* "clickhouse_driver/bufferedwriter.pyx":20
* self.buffer_size = bufsize
*
* super(BufferedWriter, self).__init__() # <<<<<<<<<<<<<<
*
* def __dealloc__(self):
*/
__pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_INCREF(((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedwriter_BufferedWriter));
__Pyx_GIVEREF(((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedwriter_BufferedWriter));
PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_ptype_17clickhouse_driver_14bufferedwriter_BufferedWriter));
__Pyx_INCREF(((PyObject *)__pyx_v_self));
__Pyx_GIVEREF(((PyObject *)__pyx_v_self));
PyTuple_SET_ITEM(__pyx_t_3, 1, ((PyObject *)__pyx_v_self));
__pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_super, __pyx_t_3, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_init); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__pyx_t_4 = NULL;
if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_4 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_4)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_4);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_2 = (__pyx_t_4) ? __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4) : __Pyx_PyObject_CallNoArg(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/bufferedwriter.pyx":12
* cdef Py_ssize_t position, buffer_size
*
* def __init__(self, Py_ssize_t bufsize): # <<<<<<<<<<<<<<
* self.buffer = <char *> PyMem_Malloc(bufsize)
* if not self.buffer:
*/
/* function exit code */
__pyx_r = 0;
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.BufferedWriter.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = -1;
__pyx_L0:;
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 468 | True | 1 |
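
Editorial note: the __init__ above only allocates the buffer and records position and buffer_size, while the commit message for this record ("Fix malformed read/write in BufferedReader") points at missing bounds handling in the read path. The sketch below is an assumption-laden illustration of the kind of capacity check such a buffer needs before a memcpy into buffer + position; the struct and function names are hypothetical and this is not the patched driver code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the buffer/position/buffer_size triple kept by the
 * writer __init__ shown above; the field names mirror the record, the rest is assumed. */
typedef struct {
    char *buffer;
    size_t position;
    size_t buffer_size;
} buffered_t;

/* Append incoming bytes, refusing to write past buffer_size. The check before
 * memcpy is the general shape of fix the commit message describes, not the actual patch. */
static int buffered_write(buffered_t *b, const char *data, size_t len) {
    if (len > b->buffer_size - b->position) {
        return -1;                 /* would overflow: caller must flush or grow first */
    }
    memcpy(b->buffer + b->position, data, len);
    b->position += len;
    return 0;
}

int main(void) {
    buffered_t b = { malloc(8), 0, 8 };
    if (!b.buffer) return 1;
    printf("first write:  %d\n", buffered_write(&b, "abcd", 4));    /* 0: fits */
    printf("second write: %d\n", buffered_write(&b, "abcdef", 6));  /* -1: rejected */
    free(b.buffer);
    return 0;
}
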
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedwriter_20BufferedSocketWriter_4__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedwriter_20BufferedSocketWriter_4__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedSocketWriter * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedwriter_20BufferedSocketWriter_4__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedSocketWriter *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
int __pyx_t_5;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.position, self.sock) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->__pyx_base.buffer); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.buffer_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.position); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3);
__Pyx_INCREF(__pyx_v_self->sock);
__Pyx_GIVEREF(__pyx_v_self->sock);
PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_v_self->sock);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_4);
__pyx_t_4 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_4 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_v__dict = __pyx_t_4;
__pyx_t_4 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_5 = (__pyx_v__dict != Py_None);
__pyx_t_6 = (__pyx_t_5 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v__dict);
__pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_3));
__pyx_t_3 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = self.sock is not None
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.position, self.sock)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = self.sock is not None # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, None), state
*/
/*else*/ {
__pyx_t_6 = (__pyx_v_self->sock != Py_None);
__pyx_v_use_setstate = __pyx_t_6;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = self.sock is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, None), state
* else:
*/
__pyx_t_6 = (__pyx_v_use_setstate != 0);
if (__pyx_t_6) {
/* "(tree fragment)":13
* use_setstate = self.sock is not None
* if use_setstate:
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_pyx_unpickle_BufferedSocketWri); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_62583983);
__Pyx_GIVEREF(__pyx_int_62583983);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_62583983);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_4, 2, Py_None);
__pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_state);
__pyx_t_3 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = self.sock is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, None), state
* else:
* return __pyx_unpickle_BufferedSocketWriter, (type(self), 0x3baf4af, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_BufferedSocketWriter__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_BufferedSocketWri); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_62583983);
__Pyx_GIVEREF(__pyx_int_62583983);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_62583983);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_state);
__pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4);
__pyx_t_2 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_3;
__pyx_t_3 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.BufferedSocketWriter.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1056 | True | 1 |
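
Editorial aside on the state-packing line state = (self.buffer, ...) above: the __Pyx_PyBytes_FromString(self->buffer) call behaves like PyBytes_FromString, measuring the buffer with strlen, so it stops at the first NUL byte and scans until it finds one. The embedded-CPython sketch below (not driver code) contrasts that with PyBytes_FromStringAndSize, which copies an explicit number of bytes and therefore never depends on NUL termination.

/* Assumes a CPython development environment (Python.h); compile with something like
 * cc demo.c $(python3-config --cflags --embed --ldflags) */
#include <Python.h>

int main(void) {
    Py_Initialize();

    char raw[8] = { 'a', 'b', '\0', 'c', 'd', 'e', 'f', 'g' };

    /* Length derived with strlen: stops at the embedded NUL; for a buffer with no
     * NUL at all it would keep scanning past the allocation. */
    PyObject *by_strlen = PyBytes_FromString(raw);

    /* Explicit length: copies exactly sizeof(raw) bytes, no NUL assumption. */
    PyObject *by_length = PyBytes_FromStringAndSize(raw, sizeof(raw));

    printf("strlen-based size: %zd\n", PyBytes_GET_SIZE(by_strlen));   /* 2 */
    printf("explicit size:     %zd\n", PyBytes_GET_SIZE(by_length));   /* 8 */

    Py_DECREF(by_strlen);
    Py_DECREF(by_length);
    Py_Finalize();
    return 0;
}
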
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pf_17clickhouse_driver_14bufferedwriter_24CompressedBufferedWriter_6__reduce_cython__ | __pyx_pf_17clickhouse_driver_14bufferedwriter_24CompressedBufferedWriter_6__reduce_cython__( struct __pyx_obj_17clickhouse_driver_14bufferedwriter_CompressedBufferedWriter * __pyx_v_self) | ['__pyx_v_self'] | static PyObject *__pyx_pf_17clickhouse_driver_14bufferedwriter_24CompressedBufferedWriter_6__reduce_cython__(struct __pyx_obj_17clickhouse_driver_14bufferedwriter_CompressedBufferedWriter *__pyx_v_self) {
PyObject *__pyx_v_state = 0;
PyObject *__pyx_v__dict = 0;
int __pyx_v_use_setstate;
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
int __pyx_t_5;
int __pyx_t_6;
__Pyx_RefNannySetupContext("__reduce_cython__", 0);
/* "(tree fragment)":5
* cdef object _dict
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.compressor, self.position) # <<<<<<<<<<<<<<
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
*/
__pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->__pyx_base.buffer); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.buffer_size); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = PyInt_FromSsize_t(__pyx_v_self->__pyx_base.position); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(4); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2);
__Pyx_INCREF(__pyx_v_self->compressor);
__Pyx_GIVEREF(__pyx_v_self->compressor);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_self->compressor);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_4, 3, __pyx_t_3);
__pyx_t_1 = 0;
__pyx_t_2 = 0;
__pyx_t_3 = 0;
__pyx_v_state = ((PyObject*)__pyx_t_4);
__pyx_t_4 = 0;
/* "(tree fragment)":6
* cdef bint use_setstate
* state = (self.buffer, self.buffer_size, self.compressor, self.position)
* _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<<
* if _dict is not None:
* state += (_dict,)
*/
__pyx_t_4 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_v__dict = __pyx_t_4;
__pyx_t_4 = 0;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.compressor, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
__pyx_t_5 = (__pyx_v__dict != Py_None);
__pyx_t_6 = (__pyx_t_5 != 0);
if (__pyx_t_6) {
/* "(tree fragment)":8
* _dict = getattr(self, '__dict__', None)
* if _dict is not None:
* state += (_dict,) # <<<<<<<<<<<<<<
* use_setstate = True
* else:
*/
__pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(__pyx_v__dict);
__Pyx_GIVEREF(__pyx_v__dict);
PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_v__dict);
__pyx_t_3 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 8, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_3));
__pyx_t_3 = 0;
/* "(tree fragment)":9
* if _dict is not None:
* state += (_dict,)
* use_setstate = True # <<<<<<<<<<<<<<
* else:
* use_setstate = self.compressor is not None
*/
__pyx_v_use_setstate = 1;
/* "(tree fragment)":7
* state = (self.buffer, self.buffer_size, self.compressor, self.position)
* _dict = getattr(self, '__dict__', None)
* if _dict is not None: # <<<<<<<<<<<<<<
* state += (_dict,)
* use_setstate = True
*/
goto __pyx_L3;
}
/* "(tree fragment)":11
* use_setstate = True
* else:
* use_setstate = self.compressor is not None # <<<<<<<<<<<<<<
* if use_setstate:
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, None), state
*/
/*else*/ {
__pyx_t_6 = (__pyx_v_self->compressor != Py_None);
__pyx_v_use_setstate = __pyx_t_6;
}
__pyx_L3:;
/* "(tree fragment)":12
* else:
* use_setstate = self.compressor is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, None), state
* else:
*/
__pyx_t_6 = (__pyx_v_use_setstate != 0);
if (__pyx_t_6) {
/* "(tree fragment)":13
* use_setstate = self.compressor is not None
* if use_setstate:
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, None), state # <<<<<<<<<<<<<<
* else:
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, state)
*/
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_3, __pyx_n_s_pyx_unpickle_CompressedBuffere); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_17355272);
__Pyx_GIVEREF(__pyx_int_17355272);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_17355272);
__Pyx_INCREF(Py_None);
__Pyx_GIVEREF(Py_None);
PyTuple_SET_ITEM(__pyx_t_4, 2, Py_None);
__pyx_t_2 = PyTuple_New(3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 13, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_3);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_v_state);
__pyx_t_3 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
/* "(tree fragment)":12
* else:
* use_setstate = self.compressor is not None
* if use_setstate: # <<<<<<<<<<<<<<
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, None), state
* else:
*/
}
/* "(tree fragment)":15
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, None), state
* else:
* return __pyx_unpickle_CompressedBufferedWriter, (type(self), 0x108d208, state) # <<<<<<<<<<<<<<
* def __setstate_cython__(self, __pyx_state):
* __pyx_unpickle_CompressedBufferedWriter__set_state(self, __pyx_state)
*/
/*else*/ {
__Pyx_XDECREF(__pyx_r);
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_pyx_unpickle_CompressedBuffere); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))));
__Pyx_INCREF(__pyx_int_17355272);
__Pyx_GIVEREF(__pyx_int_17355272);
PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_int_17355272);
__Pyx_INCREF(__pyx_v_state);
__Pyx_GIVEREF(__pyx_v_state);
PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_v_state);
__pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_2);
__Pyx_GIVEREF(__pyx_t_4);
PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_4);
__pyx_t_2 = 0;
__pyx_t_4 = 0;
__pyx_r = __pyx_t_3;
__pyx_t_3 = 0;
goto __pyx_L0;
}
/* "(tree fragment)":1
* def __reduce_cython__(self): # <<<<<<<<<<<<<<
* cdef tuple state
* cdef object _dict
*/
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.CompressedBufferedWriter.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = NULL;
__pyx_L0:;
__Pyx_XDECREF(__pyx_v_state);
__Pyx_XDECREF(__pyx_v__dict);
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 1056 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pw_17clickhouse_driver_14bufferedwriter_14BufferedWriter_1__init__ | __pyx_pw_17clickhouse_driver_14bufferedwriter_14BufferedWriter_1__init__( PyObject * __pyx_v_self , PyObject * __pyx_args , PyObject * __pyx_kwds) | ['__pyx_v_self', '__pyx_args', '__pyx_kwds'] | static int __pyx_pw_17clickhouse_driver_14bufferedwriter_14BufferedWriter_1__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
Py_ssize_t __pyx_v_bufsize;
int __pyx_r;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
{
static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_bufsize,0};
PyObject* values[1] = {0};
if (unlikely(__pyx_kwds)) {
Py_ssize_t kw_args;
const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
switch (pos_args) {
case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
CYTHON_FALLTHROUGH;
case 0: break;
default: goto __pyx_L5_argtuple_error;
}
kw_args = PyDict_Size(__pyx_kwds);
switch (pos_args) {
case 0:
if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_bufsize)) != 0)) kw_args--;
else goto __pyx_L5_argtuple_error;
}
if (unlikely(kw_args > 0)) {
if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 12, __pyx_L3_error)
}
} else if (PyTuple_GET_SIZE(__pyx_args) != 1) {
goto __pyx_L5_argtuple_error;
} else {
values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
}
__pyx_v_bufsize = __Pyx_PyIndex_AsSsize_t(values[0]); if (unlikely((__pyx_v_bufsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 12, __pyx_L3_error)
}
goto __pyx_L4_argument_unpacking_done;
__pyx_L5_argtuple_error:;
__Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 12, __pyx_L3_error)
__pyx_L3_error:;
__Pyx_AddTraceback("clickhouse_driver.bufferedwriter.BufferedWriter.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
__Pyx_RefNannyFinishContext();
return -1;
__pyx_L4_argument_unpacking_done:;
__pyx_r = __pyx_pf_17clickhouse_driver_14bufferedwriter_14BufferedWriter___init__(((struct __pyx_obj_17clickhouse_driver_14bufferedwriter_BufferedWriter *)__pyx_v_self), __pyx_v_bufsize);
/* function exit code */
__Pyx_RefNannyFinishContext();
return __pyx_r;
} | 341 | True | 1 |
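
Editorial note: the wrapper above converts the caller-supplied bufsize to Py_ssize_t, and the __init__ shown earlier hands it straight to PyMem_Malloc with only a NULL check. As a general hardening pattern (not a claim about the driver's behaviour), a size that originates outside the module can be range-checked before allocation; the sketch below is a hypothetical illustration.

#include <stdio.h>
#include <stdlib.h>

/* General hardening pattern for a size argument that arrives as a signed,
 * Py_ssize_t-style value, as in the wrapper above; not taken from the driver. */
static void *alloc_buffer(long long bufsize) {
    if (bufsize <= 0 || (unsigned long long)bufsize > (size_t)-1 / 2) {
        return NULL;               /* reject non-positive or absurdly large sizes */
    }
    return malloc((size_t)bufsize);
}

int main(void) {
    void *ok  = alloc_buffer(4096);
    void *bad = alloc_buffer(-1);
    printf("4096 -> %s, -1 -> %s\n", ok ? "allocated" : "rejected",
           bad ? "allocated" : "rejected");
    free(ok);
    return 0;
}
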
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pymod_exec_bufferedwriter | __pyx_pymod_exec_bufferedwriter( PyObject * __pyx_pyinit_module) | ['__pyx_pyinit_module'] | static CYTHON_SMALL_CODE int __pyx_pymod_exec_bufferedwriter(PyObject *__pyx_pyinit_module)
#endif
#endif
{
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
__Pyx_RefNannyDeclarations
#if CYTHON_PEP489_MULTI_PHASE_INIT
if (__pyx_m) {
if (__pyx_m == __pyx_pyinit_module) return 0;
PyErr_SetString(PyExc_RuntimeError, "Module 'bufferedwriter' has already been imported. Re-initialisation is not supported.");
return -1;
}
#elif PY_MAJOR_VERSION >= 3
if (__pyx_m) return __Pyx_NewRef(__pyx_m);
#endif
#if CYTHON_REFNANNY
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
if (!__Pyx_RefNanny) {
PyErr_Clear();
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
if (!__Pyx_RefNanny)
Py_FatalError("failed to import 'refnanny' module");
}
#endif
__Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_bufferedwriter(void)", 0);
if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pxy_PyFrame_Initialize_Offsets
__Pxy_PyFrame_Initialize_Offsets();
#endif
__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pyx_CyFunction_USED
if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_FusedFunction_USED
if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Coroutine_USED
if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Generator_USED
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_AsyncGen_USED
if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_StopAsyncIteration_USED
if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/*--- Library function declarations ---*/
/*--- Threads initialization code ---*/
#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
#ifdef WITH_THREAD /* Python build with threading support? */
PyEval_InitThreads();
#endif
#endif
/*--- Module creation code ---*/
#if CYTHON_PEP489_MULTI_PHASE_INIT
__pyx_m = __pyx_pyinit_module;
Py_INCREF(__pyx_m);
#else
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("bufferedwriter", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_d);
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_b);
__pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_cython_runtime);
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
/*--- Initialize various global constants etc. ---*/
if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
if (__pyx_module_is_main_clickhouse_driver__bufferedwriter) {
if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
}
#if PY_MAJOR_VERSION >= 3
{
PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
if (!PyDict_GetItemString(modules, "clickhouse_driver.bufferedwriter")) {
if (unlikely(PyDict_SetItemString(modules, "clickhouse_driver.bufferedwriter", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
}
}
#endif
/*--- Builtin init code ---*/
if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;
/*--- Constants init code ---*/
if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;
/*--- Global type/function init code ---*/
(void)__Pyx_modinit_global_init_code();
(void)__Pyx_modinit_variable_export_code();
(void)__Pyx_modinit_function_export_code();
if (unlikely(__Pyx_modinit_type_init_code() != 0)) goto __pyx_L1_error;
if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;
(void)__Pyx_modinit_variable_import_code();
(void)__Pyx_modinit_function_import_code();
/*--- Execution code ---*/
#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/* "clickhouse_driver/bufferedwriter.pyx":5
* from libc.string cimport memcpy
*
* from .varint import write_varint # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_INCREF(__pyx_n_s_write_varint);
__Pyx_GIVEREF(__pyx_n_s_write_varint);
PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_write_varint);
__pyx_t_2 = __Pyx_Import(__pyx_n_s_varint, __pyx_t_1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_write_varint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_write_varint, __pyx_t_1) < 0) __PYX_ERR(0, 5, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "(tree fragment)":1
* def __pyx_unpickle_BufferedWriter(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
__pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedwriter_1__pyx_unpickle_BufferedWriter, NULL, __pyx_n_s_clickhouse_driver_bufferedwriter); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_BufferedWriter, __pyx_t_2) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "(tree fragment)":11
* __pyx_unpickle_BufferedWriter__set_state(<BufferedWriter> __pyx_result, __pyx_state)
* return __pyx_result
* cdef __pyx_unpickle_BufferedWriter__set_state(BufferedWriter __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<<
* __pyx_result.buffer = __pyx_state[0]; __pyx_result.buffer_size = __pyx_state[1]; __pyx_result.position = __pyx_state[2]
* if len(__pyx_state) > 3 and hasattr(__pyx_result, '__dict__'):
*/
__pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedwriter_3__pyx_unpickle_BufferedSocketWriter, NULL, __pyx_n_s_clickhouse_driver_bufferedwriter); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_BufferedSocketWri, __pyx_t_2) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "(tree fragment)":1
* def __pyx_unpickle_CompressedBufferedWriter(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<<
* cdef object __pyx_PickleError
* cdef object __pyx_result
*/
__pyx_t_2 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_14bufferedwriter_5__pyx_unpickle_CompressedBufferedWriter, NULL, __pyx_n_s_clickhouse_driver_bufferedwriter); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_CompressedBuffere, __pyx_t_2) < 0) __PYX_ERR(1, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/bufferedwriter.pyx":1
* from cpython cimport PyMem_Malloc, PyMem_Free, PyBytes_AsString, \ # <<<<<<<<<<<<<<
* PyBytes_Check, PyBytes_FromStringAndSize
* from libc.string cimport memcpy
*/
__pyx_t_2 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_2) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/*--- Wrapped vars code ---*/
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
if (__pyx_m) {
if (__pyx_d) {
__Pyx_AddTraceback("init clickhouse_driver.bufferedwriter", __pyx_clineno, __pyx_lineno, __pyx_filename);
}
Py_CLEAR(__pyx_m);
} else if (!PyErr_Occurred()) {
PyErr_SetString(PyExc_ImportError, "init clickhouse_driver.bufferedwriter");
}
__pyx_L0:;
__Pyx_RefNannyFinishContext();
#if CYTHON_PEP489_MULTI_PHASE_INIT
return (__pyx_m != NULL) ? 0 : -1;
#elif PY_MAJOR_VERSION >= 3
return __pyx_m;
#else
return;
#endif
} | 1166 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_tp_dealloc_17clickhouse_driver_14bufferedwriter_BufferedWriter | __pyx_tp_dealloc_17clickhouse_driver_14bufferedwriter_BufferedWriter( PyObject * o) | ['o'] | static void __pyx_tp_dealloc_17clickhouse_driver_14bufferedwriter_BufferedWriter(PyObject *o) {
#if CYTHON_USE_TP_FINALIZE
if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) {
if (PyObject_CallFinalizerFromDealloc(o)) return;
}
#endif
{
PyObject *etype, *eval, *etb;
PyErr_Fetch(&etype, &eval, &etb);
++Py_REFCNT(o);
__pyx_pw_17clickhouse_driver_14bufferedwriter_14BufferedWriter_3__dealloc__(o);
--Py_REFCNT(o);
PyErr_Restore(etype, eval, etb);
}
(*Py_TYPE(o)->tp_free)(o);
} | 121 | True | 1 |
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __Pyx_decode_c_string | __Pyx_decode_c_string( const char * cstring , Py_ssize_t start , Py_ssize_t stop , const char * encoding , const char * errors , PyObject *(*decode_func)(const char*s,Py_ssize_t size,const char*errors)) | ['cstring', 'start', 'stop', 'encoding', 'errors'] | static CYTHON_INLINE PyObject* __Pyx_decode_c_string(
const char* cstring, Py_ssize_t start, Py_ssize_t stop,
const char* encoding, const char* errors,
PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) {
Py_ssize_t length;
if (unlikely((start < 0) | (stop < 0))) {
size_t slen = strlen(cstring);
if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) {
PyErr_SetString(PyExc_OverflowError,
"c-string too long to convert to Python");
return NULL;
}
length = (Py_ssize_t) slen;
if (start < 0) {
start += length;
if (start < 0)
start = 0;
}
if (stop < 0)
stop += length;
}
if (unlikely(stop <= start))
return PyUnicode_FromUnicode(NULL, 0);
length = stop - start;
cstring += start;
if (decode_func) {
return decode_func(cstring, length, errors);
} else {
return PyUnicode_Decode(cstring, length, encoding, errors);
}
} | 197 | True | 1 |
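The CWE-120 text carried in the record above describes the root pattern behind this CVE: a copy whose size comes from input that is never checked against the destination. Below is a minimal, self-contained C sketch of that pattern next to a bounds-checked variant; the names (copy_unchecked, copy_checked, DST_SIZE) are illustrative only and do not come from clickhouse-driver.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical fixed-size destination, standing in for a pre-allocated read buffer. */
    #define DST_SIZE 16

    /* Unsafe: trusts the caller-supplied length (the classic CWE-120 shape). */
    static void copy_unchecked(char *dst, const char *src, size_t len)
    {
        memcpy(dst, src, len);                  /* overflows dst whenever len > DST_SIZE */
    }

    /* Safer: the copy is refused when it would exceed the destination capacity. */
    static int copy_checked(char *dst, size_t dst_size, const char *src, size_t len)
    {
        if (len > dst_size)
            return -1;                          /* reject oversized input instead of overflowing */
        memcpy(dst, src, len);
        return 0;
    }

    int main(void)
    {
        char dst[DST_SIZE];
        const char payload[] = "a response longer than sixteen bytes";

        if (copy_checked(dst, sizeof(dst), payload, sizeof(payload)) < 0)
            puts("input rejected: it would overflow the destination");

        copy_unchecked(dst, payload, DST_SIZE); /* safe here only because the length is bounded */
        return 0;
    }

Rejecting oversized input up front is the generic mitigation the CWE entry implies; whether a given codebase clamps, rejects, or reallocates is a design choice, but the check must happen before the copy.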
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pymod_exec_stringcolumn | __pyx_pymod_exec_stringcolumn( PyObject * __pyx_pyinit_module) | ['__pyx_pyinit_module'] | static CYTHON_SMALL_CODE int __pyx_pymod_exec_stringcolumn(PyObject *__pyx_pyinit_module)
#endif
#endif
{
PyObject *__pyx_t_1 = NULL;
PyObject *__pyx_t_2 = NULL;
PyObject *__pyx_t_3 = NULL;
PyObject *__pyx_t_4 = NULL;
PyObject *__pyx_t_5 = NULL;
__Pyx_RefNannyDeclarations
#if CYTHON_PEP489_MULTI_PHASE_INIT
if (__pyx_m) {
if (__pyx_m == __pyx_pyinit_module) return 0;
PyErr_SetString(PyExc_RuntimeError, "Module 'stringcolumn' has already been imported. Re-initialisation is not supported.");
return -1;
}
#elif PY_MAJOR_VERSION >= 3
if (__pyx_m) return __Pyx_NewRef(__pyx_m);
#endif
#if CYTHON_REFNANNY
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
if (!__Pyx_RefNanny) {
PyErr_Clear();
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
if (!__Pyx_RefNanny)
Py_FatalError("failed to import 'refnanny' module");
}
#endif
__Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_stringcolumn(void)", 0);
if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pxy_PyFrame_Initialize_Offsets
__Pxy_PyFrame_Initialize_Offsets();
#endif
__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pyx_CyFunction_USED
if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_FusedFunction_USED
if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Coroutine_USED
if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Generator_USED
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_AsyncGen_USED
if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_StopAsyncIteration_USED
if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/*--- Library function declarations ---*/
/*--- Threads initialization code ---*/
#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
#ifdef WITH_THREAD /* Python build with threading support? */
PyEval_InitThreads();
#endif
#endif
/*--- Module creation code ---*/
#if CYTHON_PEP489_MULTI_PHASE_INIT
__pyx_m = __pyx_pyinit_module;
Py_INCREF(__pyx_m);
#else
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("stringcolumn", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_d);
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_b);
__pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_cython_runtime);
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
/*--- Initialize various global constants etc. ---*/
if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
if (__pyx_module_is_main_clickhouse_driver__columns__stringcolumn) {
if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
}
#if PY_MAJOR_VERSION >= 3
{
PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
if (!PyDict_GetItemString(modules, "clickhouse_driver.columns.stringcolumn")) {
if (unlikely(PyDict_SetItemString(modules, "clickhouse_driver.columns.stringcolumn", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
}
}
#endif
/*--- Builtin init code ---*/
if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;
/*--- Constants init code ---*/
if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;
/*--- Global type/function init code ---*/
(void)__Pyx_modinit_global_init_code();
(void)__Pyx_modinit_variable_export_code();
(void)__Pyx_modinit_function_export_code();
(void)__Pyx_modinit_type_init_code();
if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;
(void)__Pyx_modinit_variable_import_code();
(void)__Pyx_modinit_function_import_code();
/*--- Execution code ---*/
#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/* "clickhouse_driver/columns/stringcolumn.pyx":9
* from libc.string cimport memcpy, memset
*
* from .. import defines # <<<<<<<<<<<<<<
* from .. import errors
* from ..util import compat
*/
__pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_INCREF(__pyx_n_s_defines);
__Pyx_GIVEREF(__pyx_n_s_defines);
PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_defines);
__pyx_t_2 = __Pyx_Import(__pyx_n_s__2, __pyx_t_1, 2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_defines); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_defines, __pyx_t_1) < 0) __PYX_ERR(0, 9, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":10
*
* from .. import defines
* from .. import errors # <<<<<<<<<<<<<<
* from ..util import compat
* from .base import Column
*/
__pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 10, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_n_s_errors);
__Pyx_GIVEREF(__pyx_n_s_errors);
PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_errors);
__pyx_t_1 = __Pyx_Import(__pyx_n_s__2, __pyx_t_2, 2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 10, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_errors); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 10, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_errors, __pyx_t_2) < 0) __PYX_ERR(0, 10, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":11
* from .. import defines
* from .. import errors
* from ..util import compat # <<<<<<<<<<<<<<
* from .base import Column
*
*/
__pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_INCREF(__pyx_n_s_compat);
__Pyx_GIVEREF(__pyx_n_s_compat);
PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_compat);
__pyx_t_2 = __Pyx_Import(__pyx_n_s_util, __pyx_t_1, 2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_t_1 = __Pyx_ImportFrom(__pyx_t_2, __pyx_n_s_compat); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_compat, __pyx_t_1) < 0) __PYX_ERR(0, 11, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":12
* from .. import errors
* from ..util import compat
* from .base import Column # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_INCREF(__pyx_n_s_Column);
__Pyx_GIVEREF(__pyx_n_s_Column);
PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_Column);
__pyx_t_1 = __Pyx_Import(__pyx_n_s_base, __pyx_t_2, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_2 = __Pyx_ImportFrom(__pyx_t_1, __pyx_n_s_Column); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 12, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_Column, __pyx_t_2) < 0) __PYX_ERR(0, 12, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":15
*
*
* class String(Column): # <<<<<<<<<<<<<<
* ch_type = 'String'
* py_types = compat.string_types
*/
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_Column); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
__pyx_t_1 = 0;
__pyx_t_1 = __Pyx_CalculateMetaclass(NULL, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_Py3MetaclassPrepare(__pyx_t_1, __pyx_t_2, __pyx_n_s_String, __pyx_n_s_String, (PyObject *) NULL, __pyx_n_s_clickhouse_driver_columns_string, (PyObject *) NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
/* "clickhouse_driver/columns/stringcolumn.pyx":16
*
* class String(Column):
* ch_type = 'String' # <<<<<<<<<<<<<<
* py_types = compat.string_types
* null_value = ''
*/
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_ch_type, __pyx_n_u_String) < 0) __PYX_ERR(0, 16, __pyx_L1_error)
/* "clickhouse_driver/columns/stringcolumn.pyx":17
* class String(Column):
* ch_type = 'String'
* py_types = compat.string_types # <<<<<<<<<<<<<<
* null_value = ''
*
*/
__Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_compat); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 17, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_string_types); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 17, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_py_types, __pyx_t_5) < 0) __PYX_ERR(0, 17, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":18
* ch_type = 'String'
* py_types = compat.string_types
* null_value = '' # <<<<<<<<<<<<<<
*
* default_encoding = defines.STRINGS_ENCODING
*/
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_null_value, __pyx_kp_u__2) < 0) __PYX_ERR(0, 18, __pyx_L1_error)
/* "clickhouse_driver/columns/stringcolumn.pyx":20
* null_value = ''
*
* default_encoding = defines.STRINGS_ENCODING # <<<<<<<<<<<<<<
*
* def __init__(self, encoding=default_encoding, **kwargs):
*/
__Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_defines); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_STRINGS_ENCODING); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_default_encoding, __pyx_t_4) < 0) __PYX_ERR(0, 20, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":22
* default_encoding = defines.STRINGS_ENCODING
*
* def __init__(self, encoding=default_encoding, **kwargs): # <<<<<<<<<<<<<<
* self.encoding = encoding
* super(String, self).__init__(**kwargs)
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_6String_1__init__, 0, __pyx_n_s_String___init, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__4)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 22, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (!__Pyx_CyFunction_InitDefaults(__pyx_t_4, sizeof(__pyx_defaults), 1)) __PYX_ERR(0, 22, __pyx_L1_error)
__pyx_t_5 = PyObject_GetItem(__pyx_t_3, __pyx_n_s_default_encoding);
if (unlikely(!__pyx_t_5)) {
PyErr_Clear();
__Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_default_encoding);
}
if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 22, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_5);
__Pyx_CyFunction_Defaults(__pyx_defaults, __pyx_t_4)->__pyx_arg_encoding = __pyx_t_5;
__Pyx_GIVEREF(__pyx_t_5);
__pyx_t_5 = 0;
__Pyx_CyFunction_SetDefaultsGetter(__pyx_t_4, __pyx_pf_17clickhouse_driver_7columns_12stringcolumn_2__defaults__);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_init, __pyx_t_4) < 0) __PYX_ERR(0, 22, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":26
* super(String, self).__init__(**kwargs)
*
* def write_items(self, items, buf): # <<<<<<<<<<<<<<
* buf.write_strings(items, encoding=self.encoding)
*
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_6String_3write_items, 0, __pyx_n_s_String_write_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__6)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 26, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_write_items, __pyx_t_4) < 0) __PYX_ERR(0, 26, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":29
* buf.write_strings(items, encoding=self.encoding)
*
* def read_items(self, n_items, buf): # <<<<<<<<<<<<<<
* return buf.read_strings(n_items, encoding=self.encoding)
*
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_6String_5read_items, 0, __pyx_n_s_String_read_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__8)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 29, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_read_items, __pyx_t_4) < 0) __PYX_ERR(0, 29, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":15
*
*
* class String(Column): # <<<<<<<<<<<<<<
* ch_type = 'String'
* py_types = compat.string_types
*/
__pyx_t_4 = __Pyx_Py3ClassCreate(__pyx_t_1, __pyx_n_s_String, __pyx_t_2, __pyx_t_3, NULL, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_String, __pyx_t_4) < 0) __PYX_ERR(0, 15, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":33
*
*
* class ByteString(String): # <<<<<<<<<<<<<<
* py_types = (bytes, )
* null_value = b''
*/
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_String); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
__pyx_t_2 = 0;
__pyx_t_2 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = __Pyx_Py3MetaclassPrepare(__pyx_t_2, __pyx_t_1, __pyx_n_s_ByteString, __pyx_n_s_ByteString, (PyObject *) NULL, __pyx_n_s_clickhouse_driver_columns_string, (PyObject *) NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
/* "clickhouse_driver/columns/stringcolumn.pyx":34
*
* class ByteString(String):
* py_types = (bytes, ) # <<<<<<<<<<<<<<
* null_value = b''
*
*/
__pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 34, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)(&PyBytes_Type)));
__Pyx_GIVEREF(((PyObject *)(&PyBytes_Type)));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)(&PyBytes_Type)));
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_py_types, __pyx_t_4) < 0) __PYX_ERR(0, 34, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":35
* class ByteString(String):
* py_types = (bytes, )
* null_value = b'' # <<<<<<<<<<<<<<
*
* def write_items(self, items, buf):
*/
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_null_value, __pyx_kp_b__2) < 0) __PYX_ERR(0, 35, __pyx_L1_error)
/* "clickhouse_driver/columns/stringcolumn.pyx":37
* null_value = b''
*
* def write_items(self, items, buf): # <<<<<<<<<<<<<<
* buf.write_strings(items)
*
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_10ByteString_1write_items, 0, __pyx_n_s_ByteString_write_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__10)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 37, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_write_items, __pyx_t_4) < 0) __PYX_ERR(0, 37, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":40
* buf.write_strings(items)
*
* def read_items(self, n_items, buf): # <<<<<<<<<<<<<<
* return buf.read_strings(n_items)
*
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_10ByteString_3read_items, 0, __pyx_n_s_ByteString_read_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__12)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_read_items, __pyx_t_4) < 0) __PYX_ERR(0, 40, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":33
*
*
* class ByteString(String): # <<<<<<<<<<<<<<
* py_types = (bytes, )
* null_value = b''
*/
__pyx_t_4 = __Pyx_Py3ClassCreate(__pyx_t_2, __pyx_n_s_ByteString, __pyx_t_1, __pyx_t_3, NULL, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_ByteString, __pyx_t_4) < 0) __PYX_ERR(0, 33, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":44
*
*
* class FixedString(String): # <<<<<<<<<<<<<<
* ch_type = 'FixedString'
*
*/
__Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_n_s_String); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_GIVEREF(__pyx_t_1);
PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1);
__pyx_t_1 = 0;
__pyx_t_1 = __Pyx_CalculateMetaclass(NULL, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_3 = __Pyx_Py3MetaclassPrepare(__pyx_t_1, __pyx_t_2, __pyx_n_s_FixedString, __pyx_n_s_FixedString, (PyObject *) NULL, __pyx_n_s_clickhouse_driver_columns_string, (PyObject *) NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
/* "clickhouse_driver/columns/stringcolumn.pyx":45
*
* class FixedString(String):
* ch_type = 'FixedString' # <<<<<<<<<<<<<<
*
* def __init__(self, length, **kwargs):
*/
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_ch_type, __pyx_n_u_FixedString) < 0) __PYX_ERR(0, 45, __pyx_L1_error)
/* "clickhouse_driver/columns/stringcolumn.pyx":47
* ch_type = 'FixedString'
*
* def __init__(self, length, **kwargs): # <<<<<<<<<<<<<<
* self.length = length
* super(FixedString, self).__init__(**kwargs)
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_11FixedString_1__init__, 0, __pyx_n_s_FixedString___init, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__14)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 47, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_init, __pyx_t_4) < 0) __PYX_ERR(0, 47, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":51
* super(FixedString, self).__init__(**kwargs)
*
* def read_items(self, Py_ssize_t n_items, buf): # <<<<<<<<<<<<<<
* cdef Py_ssize_t i, j, length = self.length
* encoding = self.encoding.encode('utf-8')
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_11FixedString_3read_items, 0, __pyx_n_s_FixedString_read_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__16)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 51, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_read_items, __pyx_t_4) < 0) __PYX_ERR(0, 51, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":83
* return items
*
* def write_items(self, items, buf): # <<<<<<<<<<<<<<
* cdef Py_ssize_t buf_pos = 0
* cdef Py_ssize_t length = self.length
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_11FixedString_5write_items, 0, __pyx_n_s_FixedString_write_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__18)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 83, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_write_items, __pyx_t_4) < 0) __PYX_ERR(0, 83, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":44
*
*
* class FixedString(String): # <<<<<<<<<<<<<<
* ch_type = 'FixedString'
*
*/
__pyx_t_4 = __Pyx_Py3ClassCreate(__pyx_t_1, __pyx_n_s_FixedString, __pyx_t_2, __pyx_t_3, NULL, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_FixedString, __pyx_t_4) < 0) __PYX_ERR(0, 44, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":114
*
*
* class ByteFixedString(FixedString): # <<<<<<<<<<<<<<
* py_types = (bytearray, bytes)
* null_value = b''
*/
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_FixedString); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_GIVEREF(__pyx_t_2);
PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_2);
__pyx_t_2 = 0;
__pyx_t_2 = __Pyx_CalculateMetaclass(NULL, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = __Pyx_Py3MetaclassPrepare(__pyx_t_2, __pyx_t_1, __pyx_n_s_ByteFixedString, __pyx_n_s_ByteFixedString, (PyObject *) NULL, __pyx_n_s_clickhouse_driver_columns_string, (PyObject *) NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
/* "clickhouse_driver/columns/stringcolumn.pyx":115
*
* class ByteFixedString(FixedString):
* py_types = (bytearray, bytes) # <<<<<<<<<<<<<<
* null_value = b''
*
*/
__pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 115, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
__Pyx_INCREF(((PyObject *)(&PyByteArray_Type)));
__Pyx_GIVEREF(((PyObject *)(&PyByteArray_Type)));
PyTuple_SET_ITEM(__pyx_t_4, 0, ((PyObject *)(&PyByteArray_Type)));
__Pyx_INCREF(((PyObject *)(&PyBytes_Type)));
__Pyx_GIVEREF(((PyObject *)(&PyBytes_Type)));
PyTuple_SET_ITEM(__pyx_t_4, 1, ((PyObject *)(&PyBytes_Type)));
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_py_types, __pyx_t_4) < 0) __PYX_ERR(0, 115, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":116
* class ByteFixedString(FixedString):
* py_types = (bytearray, bytes)
* null_value = b'' # <<<<<<<<<<<<<<
*
* def read_items(self, Py_ssize_t n_items, buf):
*/
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_null_value, __pyx_kp_b__2) < 0) __PYX_ERR(0, 116, __pyx_L1_error)
/* "clickhouse_driver/columns/stringcolumn.pyx":118
* null_value = b''
*
* def read_items(self, Py_ssize_t n_items, buf): # <<<<<<<<<<<<<<
* cdef Py_ssize_t i
* cdef Py_ssize_t length = self.length
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_15ByteFixedString_1read_items, 0, __pyx_n_s_ByteFixedString_read_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__20)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 118, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_read_items, __pyx_t_4) < 0) __PYX_ERR(0, 118, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":131
* return items
*
* def write_items(self, items, buf): # <<<<<<<<<<<<<<
* cdef Py_ssize_t buf_pos = 0
* cdef Py_ssize_t length = self.length
*/
__pyx_t_4 = __Pyx_CyFunction_New(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_15ByteFixedString_3write_items, 0, __pyx_n_s_ByteFixedString_write_items, NULL, __pyx_n_s_clickhouse_driver_columns_string, __pyx_d, ((PyObject *)__pyx_codeobj__22)); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 131, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (__Pyx_SetNameInClass(__pyx_t_3, __pyx_n_s_write_items, __pyx_t_4) < 0) __PYX_ERR(0, 131, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":114
*
*
* class ByteFixedString(FixedString): # <<<<<<<<<<<<<<
* py_types = (bytearray, bytes)
* null_value = b''
*/
__pyx_t_4 = __Pyx_Py3ClassCreate(__pyx_t_2, __pyx_n_s_ByteFixedString, __pyx_t_1, __pyx_t_3, NULL, 0, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_4);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_ByteFixedString, __pyx_t_4) < 0) __PYX_ERR(0, 114, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":158
*
*
* def create_string_column(spec, column_options): # <<<<<<<<<<<<<<
* client_settings = column_options['context'].client_settings
* strings_as_bytes = client_settings['strings_as_bytes']
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_7columns_12stringcolumn_1create_string_column, NULL, __pyx_n_s_clickhouse_driver_columns_string); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 158, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_create_string_column, __pyx_t_1) < 0) __PYX_ERR(0, 158, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/columns/stringcolumn.pyx":1
* from cpython cimport Py_INCREF, PyBytes_AsString, PyBytes_FromStringAndSize, \ # <<<<<<<<<<<<<<
* PyBytes_Check
* # Using python's versions of pure c memory management functions for
*/
__pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/*--- Wrapped vars code ---*/
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_XDECREF(__pyx_t_3);
__Pyx_XDECREF(__pyx_t_4);
__Pyx_XDECREF(__pyx_t_5);
if (__pyx_m) {
if (__pyx_d) {
__Pyx_AddTraceback("init clickhouse_driver.columns.stringcolumn", __pyx_clineno, __pyx_lineno, __pyx_filename);
}
Py_CLEAR(__pyx_m);
} else if (!PyErr_Occurred()) {
PyErr_SetString(PyExc_ImportError, "init clickhouse_driver.columns.stringcolumn");
}
__pyx_L0:;
__Pyx_RefNannyFinishContext();
#if CYTHON_PEP489_MULTI_PHASE_INIT
return (__pyx_m != NULL) ? 0 : -1;
#elif PY_MAJOR_VERSION >= 3
return __pyx_m;
#else
return;
#endif
} | 3957 | True | 1 |
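The CVE description for this record says a crafted server response can overflow a buffer in the driver's reader during string handling ("read_strings and read affected"). As a hedged illustration of the general defense, the sketch below parses a length-prefixed string and validates the declared length against the bytes actually remaining before copying; read_prefixed_string and the 1-byte length prefix are assumptions made for brevity, not the driver's real wire format or API.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Hypothetical parser for one length-prefixed string inside a received
     * response. The declared length is checked against the bytes actually
     * remaining in the buffer before any copy happens.
     */
    static char *read_prefixed_string(const uint8_t *buf, size_t buf_len, size_t *pos)
    {
        size_t declared, remaining;
        char *out;

        if (*pos >= buf_len)
            return NULL;                        /* nothing left to read */

        declared = buf[*pos];                   /* 1-byte length prefix, for brevity */
        *pos += 1;

        remaining = buf_len - *pos;
        if (declared > remaining)
            return NULL;                        /* malformed: length exceeds available data */

        out = malloc(declared + 1);
        if (!out)
            return NULL;

        memcpy(out, buf + *pos, declared);      /* provably in bounds after the check */
        out[declared] = '\0';
        *pos += declared;
        return out;
    }

    int main(void)
    {
        const uint8_t response[] = { 5, 'h', 'e', 'l', 'l', 'o' };
        size_t pos = 0;
        char *s = read_prefixed_string(response, sizeof(response), &pos);

        if (s) {
            puts(s);                            /* prints "hello" */
            free(s);
        }
        return 0;
    }

A caller would typically treat a NULL return as a protocol error and drop the connection rather than trusting any further data from that peer.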
CVE-2020-26759 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/d708ed548e1d6f254ba81a21de8ba543a53b5598', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'name': 'https://github.com/mymarilyn/clickhouse-driver/commit/3e990547e064b8fca916b23a0f7d6fe8c63c7f6b', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:clickhouse-driver_project:clickhouse-driver:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.1.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'clickhouse-driver before 0.1.5 allows a malicious clickhouse server to trigger a crash or execute arbitrary code (on a database client) via a crafted server response, due to a buffer overflow.'}] | 2021-01-08T21:19Z | 2021-01-06T13:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Konstantin Lebedev | 2020-09-04 12:46:34+03:00 | Fix malformed read/write in BufferedReader
read_strings and read affected | 3e990547e064b8fca916b23a0f7d6fe8c63c7f6b | False | mymarilyn/clickhouse-driver | ClickHouse Python Driver with native interface support | 2017-05-10 22:13:04 | 2022-08-26 12:08:22 | https://clickhouse-driver.readthedocs.io | mymarilyn | 900.0 | 171.0 | __pyx_pymod_exec_varint | __pyx_pymod_exec_varint( PyObject * __pyx_pyinit_module) | ['__pyx_pyinit_module'] | static CYTHON_SMALL_CODE int __pyx_pymod_exec_varint(PyObject *__pyx_pyinit_module)
#endif
#endif
{
PyObject *__pyx_t_1 = NULL;
__Pyx_RefNannyDeclarations
#if CYTHON_PEP489_MULTI_PHASE_INIT
if (__pyx_m) {
if (__pyx_m == __pyx_pyinit_module) return 0;
PyErr_SetString(PyExc_RuntimeError, "Module 'varint' has already been imported. Re-initialisation is not supported.");
return -1;
}
#elif PY_MAJOR_VERSION >= 3
if (__pyx_m) return __Pyx_NewRef(__pyx_m);
#endif
#if CYTHON_REFNANNY
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny");
if (!__Pyx_RefNanny) {
PyErr_Clear();
__Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny");
if (!__Pyx_RefNanny)
Py_FatalError("failed to import 'refnanny' module");
}
#endif
__Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_varint(void)", 0);
if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pxy_PyFrame_Initialize_Offsets
__Pxy_PyFrame_Initialize_Offsets();
#endif
__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error)
__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error)
#ifdef __Pyx_CyFunction_USED
if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_FusedFunction_USED
if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Coroutine_USED
if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_Generator_USED
if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_AsyncGen_USED
if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
#ifdef __Pyx_StopAsyncIteration_USED
if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/*--- Library function declarations ---*/
/*--- Threads initialization code ---*/
#if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS
#ifdef WITH_THREAD /* Python build with threading support? */
PyEval_InitThreads();
#endif
#endif
/*--- Module creation code ---*/
#if CYTHON_PEP489_MULTI_PHASE_INIT
__pyx_m = __pyx_pyinit_module;
Py_INCREF(__pyx_m);
#else
#if PY_MAJOR_VERSION < 3
__pyx_m = Py_InitModule4("varint", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m);
#else
__pyx_m = PyModule_Create(&__pyx_moduledef);
#endif
if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_d);
__pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_b);
__pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error)
Py_INCREF(__pyx_cython_runtime);
if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error);
/*--- Initialize various global constants etc. ---*/
if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT)
if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
if (__pyx_module_is_main_clickhouse_driver__varint) {
if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
}
#if PY_MAJOR_VERSION >= 3
{
PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error)
if (!PyDict_GetItemString(modules, "clickhouse_driver.varint")) {
if (unlikely(PyDict_SetItemString(modules, "clickhouse_driver.varint", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error)
}
}
#endif
/*--- Builtin init code ---*/
if (__Pyx_InitCachedBuiltins() < 0) goto __pyx_L1_error;
/*--- Constants init code ---*/
if (__Pyx_InitCachedConstants() < 0) goto __pyx_L1_error;
/*--- Global type/function init code ---*/
(void)__Pyx_modinit_global_init_code();
(void)__Pyx_modinit_variable_export_code();
(void)__Pyx_modinit_function_export_code();
(void)__Pyx_modinit_type_init_code();
if (unlikely(__Pyx_modinit_type_import_code() != 0)) goto __pyx_L1_error;
(void)__Pyx_modinit_variable_import_code();
(void)__Pyx_modinit_function_import_code();
/*--- Execution code ---*/
#if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED)
if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error)
#endif
/* "clickhouse_driver/varint.pyx":4
*
*
* def write_varint(unsigned long long number, buf): # <<<<<<<<<<<<<<
* """
* Writes integer of variable length using LEB128.
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_6varint_1write_varint, NULL, __pyx_n_s_clickhouse_driver_varint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_write_varint, __pyx_t_1) < 0) __PYX_ERR(0, 4, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/varint.pyx":28
*
*
* def read_varint(f): # <<<<<<<<<<<<<<
* """
* Reads integer of variable length using LEB128.
*/
__pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_17clickhouse_driver_6varint_3read_varint, NULL, __pyx_n_s_clickhouse_driver_varint); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 28, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_read_varint, __pyx_t_1) < 0) __PYX_ERR(0, 28, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/* "clickhouse_driver/varint.pyx":1
* from cpython cimport Py_INCREF, PyBytes_FromStringAndSize # <<<<<<<<<<<<<<
*
*
*/
__pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
/*--- Wrapped vars code ---*/
goto __pyx_L0;
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
if (__pyx_m) {
if (__pyx_d) {
__Pyx_AddTraceback("init clickhouse_driver.varint", __pyx_clineno, __pyx_lineno, __pyx_filename);
}
Py_CLEAR(__pyx_m);
} else if (!PyErr_Occurred()) {
PyErr_SetString(PyExc_ImportError, "init clickhouse_driver.varint");
}
__pyx_L0:;
__Pyx_RefNannyFinishContext();
#if CYTHON_PEP489_MULTI_PHASE_INIT
return (__pyx_m != NULL) ? 0 : -1;
#elif PY_MAJOR_VERSION >= 3
return __pyx_m;
#else
return;
#endif
} | 928 | True | 1 |
CVE-2020-27153 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | LOW | HIGH | 8.6 | HIGH | 3.9 | 4.7 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1884817', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1884817', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/5a180f2ec9edfacafd95e5fed20d36fe8e077f07', 'name': 'https://github.com/bluez/bluez/commit/5a180f2ec9edfacafd95e5fed20d36fe8e077f07', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/1cd644db8c23a2f530ddb93cebed7dacc5f5721a', 'name': 'https://github.com/bluez/bluez/commit/1cd644db8c23a2f530ddb93cebed7dacc5f5721a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2020/10/msg00022.html', 'name': '[debian-lts-announce] 20201021 [SECURITY] [DLA 2410-1] bluez security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202011-01', 'name': 'GLSA-202011-01', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00034.html', 'name': 'openSUSE-SU-2020:1876', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00036.html', 'name': 'openSUSE-SU-2020:1880', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://www.debian.org/security/2021/dsa-4951', 'name': 'DSA-4951', 'refsource': 'DEBIAN', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-415'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:bluez:bluez:*:*:*:*:*:*:*:*', 'versionEndExcluding': '5.55', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In BlueZ before 5.55, a double free was found in the gatttool disconnect_cb() routine from shared/att.c. A remote attacker could potentially cause a denial of service or code execution, during service discovery, due to a redundant disconnect MGMT event.'}] | 2022-04-05T15:59Z | 2020-10-15T03:15Z | Double Free | The product calls free() twice on the same memory address, potentially leading to modification of unexpected memory locations. | When a program calls free() twice with the same argument, the program's memory management data structures become corrupted. This corruption can cause the program to crash or, in some circumstances, cause two later calls to malloc() to return the same pointer. If malloc() returns the same value twice and the program later gives the attacker control over the data that is written into this doubly-allocated memory, the program becomes vulnerable to a buffer overflow attack.
| https://cwe.mitre.org/data/definitions/415.html | 0 | Luiz Augusto von Dentz | 2020-07-15 18:25:37-07:00 | shared/att: Fix possible crash on disconnect
If there are pending requests while disconnecting they would be notified,
but clients may end up being freed in the process, which will then be
calling bt_att_cancel to cancel its requests, causing the following
trace:
Invalid read of size 4
at 0x1D894C: enable_ccc_callback (gatt-client.c:1627)
by 0x1D247B: disc_att_send_op (att.c:417)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1D47B7: disconnect_cb (att.c:635)
by 0x1E0707: watch_callback (io-glib.c:170)
by 0x48E963B: g_main_context_dispatch (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x48E9AC7: ??? (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x48E9ECF: g_main_loop_run (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x1E0E97: mainloop_run (mainloop-glib.c:79)
by 0x1E13B3: mainloop_run_with_signal (mainloop-notify.c:201)
by 0x12BC3B: main (main.c:770)
Address 0x7d40a28 is 24 bytes inside a block of size 32 free'd
at 0x484A2E0: free (vg_replace_malloc.c:540)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1CCC83: queue_destroy (queue.c:73)
by 0x1D7DD7: bt_gatt_client_free (gatt-client.c:2209)
by 0x16497B: batt_free (battery.c:77)
by 0x16497B: batt_remove (battery.c:286)
by 0x1A0013: service_remove (service.c:176)
by 0x1A9B7B: device_remove_gatt_service (device.c:3691)
by 0x1A9B7B: gatt_service_removed (device.c:3805)
by 0x1CC90B: queue_foreach (queue.c:220)
by 0x1DE27B: notify_service_changed.isra.0.part.0 (gatt-db.c:369)
by 0x1DE387: notify_service_changed (gatt-db.c:361)
by 0x1DE387: gatt_db_service_destroy (gatt-db.c:385)
by 0x1DE3EF: gatt_db_remove_service (gatt-db.c:519)
by 0x1D674F: discovery_op_complete (gatt-client.c:388)
by 0x1D6877: discover_primary_cb (gatt-client.c:1260)
by 0x1E220B: discovery_op_complete (gatt-helpers.c:628)
by 0x1E249B: read_by_grp_type_cb (gatt-helpers.c:730)
by 0x1D247B: disc_att_send_op (att.c:417)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1D47B7: disconnect_cb (att.c:635) | 1cd644db8c23a2f530ddb93cebed7dacc5f5721a | False | bluez/bluez | Main BlueZ tree | 2019-11-05 10:32:32 | 2022-08-24 12:13:09 | null | bluez | 327.0 | 143.0 | cancel_att_send_op | cancel_att_send_op( struct att_send_op * op) | ['op'] | static void cancel_att_send_op(struct att_send_op *op)
{
if (op->destroy)
op->destroy(op->user_data);
op->user_data = NULL;
op->callback = NULL;
op->destroy = NULL;
} | 42 | True | 1 |
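The CWE-415 description attached to this record explains why freeing the same pointer twice corrupts allocator bookkeeping. The short C sketch below shows the defect and the free-then-NULL idiom that turns a second release into a defined no-op; struct pending_op and both teardown functions are hypothetical and deliberately smaller than the BlueZ code.

    #include <stdlib.h>

    struct pending_op {
        void *user_data;
    };

    /* Double free: two teardown paths release the same allocation (CWE-415). */
    static void buggy_teardown(struct pending_op *op)
    {
        free(op->user_data);
        /* ...some later cleanup path runs on the same op... */
        free(op->user_data);                    /* undefined behaviour */
    }

    /* Free-then-NULL: any later pass degenerates into free(NULL), a defined no-op. */
    static void safe_teardown(struct pending_op *op)
    {
        free(op->user_data);
        op->user_data = NULL;
        free(op->user_data);                    /* harmless */
    }

    int main(void)
    {
        struct pending_op op = { .user_data = malloc(32) };

        safe_teardown(&op);                     /* buggy_teardown(&op) would corrupt the heap */
        (void)buggy_teardown;                   /* referenced only to silence unused warnings */
        return 0;
    }

Because free(NULL) is defined to do nothing, clearing an owning pointer immediately after releasing it is a cheap habit to apply everywhere ownership is given up.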
CVE-2020-27153 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | LOW | HIGH | 8.6 | HIGH | 3.9 | 4.7 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1884817', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1884817', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/5a180f2ec9edfacafd95e5fed20d36fe8e077f07', 'name': 'https://github.com/bluez/bluez/commit/5a180f2ec9edfacafd95e5fed20d36fe8e077f07', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/1cd644db8c23a2f530ddb93cebed7dacc5f5721a', 'name': 'https://github.com/bluez/bluez/commit/1cd644db8c23a2f530ddb93cebed7dacc5f5721a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2020/10/msg00022.html', 'name': '[debian-lts-announce] 20201021 [SECURITY] [DLA 2410-1] bluez security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202011-01', 'name': 'GLSA-202011-01', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00034.html', 'name': 'openSUSE-SU-2020:1876', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-11/msg00036.html', 'name': 'openSUSE-SU-2020:1880', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://www.debian.org/security/2021/dsa-4951', 'name': 'DSA-4951', 'refsource': 'DEBIAN', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-415'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:bluez:bluez:*:*:*:*:*:*:*:*', 'versionEndExcluding': '5.55', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In BlueZ before 5.55, a double free was found in the gatttool disconnect_cb() routine from shared/att.c. A remote attacker could potentially cause a denial of service or code execution, during service discovery, due to a redundant disconnect MGMT event.'}] | 2022-04-05T15:59Z | 2020-10-15T03:15Z | Double Free | The product calls free() twice on the same memory address, potentially leading to modification of unexpected memory locations. | When a program calls free() twice with the same argument, the program's memory management data structures become corrupted. This corruption can cause the program to crash or, in some circumstances, cause two later calls to malloc() to return the same pointer. If malloc() returns the same value twice and the program later gives the attacker control over the data that is written into this doubly-allocated memory, the program becomes vulnerable to a buffer overflow attack.
| https://cwe.mitre.org/data/definitions/415.html | 0 | Luiz Augusto von Dentz | 2020-07-15 18:25:37-07:00 | shared/att: Fix possible crash on disconnect
If there are pending requests while disconnecting they would be notified,
but clients may end up being freed in the process, which will then be
calling bt_att_cancel to cancel its requests, causing the following
trace:
Invalid read of size 4
at 0x1D894C: enable_ccc_callback (gatt-client.c:1627)
by 0x1D247B: disc_att_send_op (att.c:417)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1D47B7: disconnect_cb (att.c:635)
by 0x1E0707: watch_callback (io-glib.c:170)
by 0x48E963B: g_main_context_dispatch (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x48E9AC7: ??? (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x48E9ECF: g_main_loop_run (in /usr/lib/libglib-2.0.so.0.6400.4)
by 0x1E0E97: mainloop_run (mainloop-glib.c:79)
by 0x1E13B3: mainloop_run_with_signal (mainloop-notify.c:201)
by 0x12BC3B: main (main.c:770)
Address 0x7d40a28 is 24 bytes inside a block of size 32 free'd
at 0x484A2E0: free (vg_replace_malloc.c:540)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1CCC83: queue_destroy (queue.c:73)
by 0x1D7DD7: bt_gatt_client_free (gatt-client.c:2209)
by 0x16497B: batt_free (battery.c:77)
by 0x16497B: batt_remove (battery.c:286)
by 0x1A0013: service_remove (service.c:176)
by 0x1A9B7B: device_remove_gatt_service (device.c:3691)
by 0x1A9B7B: gatt_service_removed (device.c:3805)
by 0x1CC90B: queue_foreach (queue.c:220)
by 0x1DE27B: notify_service_changed.isra.0.part.0 (gatt-db.c:369)
by 0x1DE387: notify_service_changed (gatt-db.c:361)
by 0x1DE387: gatt_db_service_destroy (gatt-db.c:385)
by 0x1DE3EF: gatt_db_remove_service (gatt-db.c:519)
by 0x1D674F: discovery_op_complete (gatt-client.c:388)
by 0x1D6877: discover_primary_cb (gatt-client.c:1260)
by 0x1E220B: discovery_op_complete (gatt-helpers.c:628)
by 0x1E249B: read_by_grp_type_cb (gatt-helpers.c:730)
by 0x1D247B: disc_att_send_op (att.c:417)
by 0x1CCC17: queue_remove_all (queue.c:354)
by 0x1D47B7: disconnect_cb (att.c:635) | 1cd644db8c23a2f530ddb93cebed7dacc5f5721a | False | bluez/bluez | Main BlueZ tree | 2019-11-05 10:32:32 | 2022-08-24 12:13:09 | null | bluez | 327.0 | 143.0 | disconnect_cb | disconnect_cb( struct io * io , void * user_data) | ['io', 'user_data'] | static bool disconnect_cb(struct io *io, void *user_data)
{
struct bt_att_chan *chan = user_data;
struct bt_att *att = chan->att;
int err;
socklen_t len;
len = sizeof(err);
if (getsockopt(chan->fd, SOL_SOCKET, SO_ERROR, &err, &len) < 0) {
util_debug(chan->att->debug_callback, chan->att->debug_data,
"(chan %p) Failed to obtain disconnect"
" error: %s", chan, strerror(errno));
err = 0;
}
util_debug(chan->att->debug_callback, chan->att->debug_data,
"Channel %p disconnected: %s",
chan, strerror(err));
/* Detach channel */
queue_remove(att->chans, chan);
/* Notify request callbacks */
queue_remove_all(att->req_queue, NULL, NULL, disc_att_send_op);
queue_remove_all(att->ind_queue, NULL, NULL, disc_att_send_op);
queue_remove_all(att->write_queue, NULL, NULL, disc_att_send_op);
if (chan->pending_req) {
disc_att_send_op(chan->pending_req);
chan->pending_req = NULL;
}
if (chan->pending_ind) {
disc_att_send_op(chan->pending_ind);
chan->pending_ind = NULL;
}
bt_att_chan_free(chan);
/* Don't run disconnect callback if there are channels left */
if (!queue_isempty(att->chans))
return false;
bt_att_ref(att);
queue_foreach(att->disconn_list, disconn_handler, INT_TO_PTR(err));
bt_att_unregister_all(att);
bt_att_unref(att);
return false;
} | 258 | True | 1 |
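The commit message for this record describes the crash mechanics: while disconnect_cb is flushing pending operations, a client callback can free its owner and re-enter cancellation, touching memory the flush is still using. The sketch below shows the general defensive ordering, detaching the pending pointer before invoking user code so a re-entrant cancel finds nothing left to release. It illustrates the idea only and is not the actual BlueZ patch; all names are hypothetical.

    #include <stdlib.h>

    struct op {
        void (*callback)(void *user_data);
        void *user_data;
    };

    struct chan {
        struct op *pending;
    };

    static struct chan g_chan;

    /* Re-entrancy-safe completion: detach the op before running user code, so a
     * callback that triggers another cancel/teardown sees pending == NULL instead
     * of freeing the same operation a second time. */
    static void complete_pending(struct chan *c)
    {
        struct op *op = c->pending;

        if (!op)
            return;

        c->pending = NULL;                      /* detach first */
        if (op->callback)
            op->callback(op->user_data);        /* user code may re-enter here */
        free(op);                               /* freed exactly once */
    }

    /* Simulated misbehaving client: cancels again from inside its own callback. */
    static void reentrant_cb(void *user_data)
    {
        (void)user_data;
        complete_pending(&g_chan);              /* no-op, pending is already NULL */
    }

    int main(void)
    {
        struct op *op = calloc(1, sizeof(*op));

        if (!op)
            return 1;
        op->callback = reentrant_cb;
        g_chan.pending = op;
        complete_pending(&g_chan);              /* op is released once despite re-entry */
        return 0;
    }

In main, the callback deliberately re-enters complete_pending; because the channel was detached first, the inner call returns immediately and the operation is freed exactly once.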
CVE-2021-3658 | False | False | False | False | AV:A/AC:L/Au:N/C:P/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 3.3 | CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'name': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'name': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-863'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:bluez:bluez:*:*:*:*:*:*:*:*', 'versionEndExcluding': '5.61', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "bluetoothd from bluez incorrectly saves adapters' Discoverable status when a device is powered down, and restores it when powered up. If a device is powered down while discoverable, it will be discoverable when powered on again. This could lead to inadvertent exposure of the bluetooth stack to physically nearby attackers."}] | 2022-06-03T16:22Z | 2022-03-02T23:15Z | Incorrect Authorization | The software performs an authorization check when an actor attempts to access a resource or perform an action, but it does not correctly perform the check. This allows attackers to bypass intended access restrictions. |
Assuming a user with a given identity, authorization is the process of determining whether that user can access a given resource, based on the user's privileges and any permissions or other access-control specifications that apply to the resource.
When access control checks are incorrectly applied, users are able to access data or perform actions that they should not be allowed to perform. This can lead to a wide range of problems, including information exposures, denial of service, and arbitrary code execution.
| https://cwe.mitre.org/data/definitions/863.html | 0 | Luiz Augusto von Dentz | 2021-06-24 16:32:04-07:00 | adapter: Fix storing discoverable setting
The discoverable setting shall only be stored when changed via the Discoverable
property and not when a discovery client sets it, as that is considered
temporary just for the lifetime of the discovery. | b497b5942a8beb8f89ca1c359c54ad67ec843055 | False | bluez/bluez | Main BlueZ tree | 2019-11-05 10:32:32 | 2022-08-24 12:13:09 | null | bluez | 327.0 | 143.0 | discovery_stop | discovery_stop( struct discovery_client * client) | ['client'] | static int discovery_stop(struct discovery_client *client)
{
struct btd_adapter *adapter = client->adapter;
struct mgmt_cp_stop_discovery cp;
/* Check if there are more client discovering */
if (g_slist_next(adapter->discovery_list)) {
discovery_remove(client);
update_discovery_filter(adapter);
return 0;
}
if (adapter->discovery_discoverable)
set_discovery_discoverable(adapter, false);
/*
* In the idle phase of a discovery, there is no need to stop it
* and so it is enough to send out the signal and just return.
*/
if (adapter->discovery_enable == 0x00) {
discovery_remove(client);
adapter->discovering = false;
g_dbus_emit_property_changed(dbus_conn, adapter->path,
ADAPTER_INTERFACE, "Discovering");
trigger_passive_scanning(adapter);
return 0;
}
cp.type = adapter->discovery_type;
adapter->client = client;
mgmt_send(adapter->mgmt, MGMT_OP_STOP_DISCOVERY,
adapter->dev_id, sizeof(cp), &cp,
stop_discovery_complete, adapter, NULL);
return -EINPROGRESS;
} | 146 | True | 1 |
CVE-2021-3658 | False | False | False | False | AV:A/AC:L/Au:N/C:P/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 3.3 | CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'name': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'name': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-863'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:bluez:bluez:*:*:*:*:*:*:*:*', 'versionEndExcluding': '5.61', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "bluetoothd from bluez incorrectly saves adapters' Discoverable status when a device is powered down, and restores it when powered up. If a device is powered down while discoverable, it will be discoverable when powered on again. This could lead to inadvertent exposure of the bluetooth stack to physically nearby attackers."}] | 2022-06-03T16:22Z | 2022-03-02T23:15Z | Incorrect Authorization | The software performs an authorization check when an actor attempts to access a resource or perform an action, but it does not correctly perform the check. This allows attackers to bypass intended access restrictions. |
Assuming a user with a given identity, authorization is the process of determining whether that user can access a given resource, based on the user's privileges and any permissions or other access-control specifications that apply to the resource.
When access control checks are incorrectly applied, users are able to access data or perform actions that they should not be allowed to perform. This can lead to a wide range of problems, including information exposures, denial of service, and arbitrary code execution.
| https://cwe.mitre.org/data/definitions/863.html | 0 | Luiz Augusto von Dentz | 2021-06-24 16:32:04-07:00 | adapter: Fix storing discoverable setting
The discoverable setting shall only be stored when it is changed via the Discoverable
property, and not when a discovery client sets it, as that change is considered
temporary, lasting only for the lifetime of the discovery.
{
uint32_t changed_mask;
changed_mask = adapter->current_settings ^ settings;
adapter->current_settings = settings;
adapter->pending_settings &= ~changed_mask;
DBG("Changed settings: 0x%08x", changed_mask);
DBG("Pending settings: 0x%08x", adapter->pending_settings);
if (changed_mask & MGMT_SETTING_POWERED) {
g_dbus_emit_property_changed(dbus_conn, adapter->path,
ADAPTER_INTERFACE, "Powered");
if (adapter->current_settings & MGMT_SETTING_POWERED) {
adapter_start(adapter);
} else {
adapter_stop(adapter);
if (powering_down) {
adapter_remaining--;
if (!adapter_remaining)
btd_exit();
}
}
}
if ((changed_mask & MGMT_SETTING_LE) &&
btd_adapter_get_powered(adapter) &&
(adapter->current_settings & MGMT_SETTING_LE))
trigger_passive_scanning(adapter);
if (changed_mask & MGMT_SETTING_DISCOVERABLE) {
g_dbus_emit_property_changed(dbus_conn, adapter->path,
ADAPTER_INTERFACE, "Discoverable");
store_adapter_info(adapter);
btd_adv_manager_refresh(adapter->adv_manager);
}
if (changed_mask & MGMT_SETTING_BONDABLE) {
g_dbus_emit_property_changed(dbus_conn, adapter->path,
ADAPTER_INTERFACE, "Pairable");
trigger_pairable_timeout(adapter);
}
} | 198 | True | 1 |
CVE-2021-3658 | False | False | False | False | AV:A/AC:L/Au:N/C:P/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 3.3 | CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | ADJACENT_NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'name': 'https://gitlab.gnome.org/GNOME/gnome-bluetooth/-/issues/89', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1984728', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'name': 'https://github.com/bluez/bluez/commit/b497b5942a8beb8f89ca1c359c54ad67ec843055', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'name': 'https://security.netapp.com/advisory/ntap-20220407-0002/', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-863'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:bluez:bluez:*:*:*:*:*:*:*:*', 'versionEndExcluding': '5.61', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "bluetoothd from bluez incorrectly saves adapters' Discoverable status when a device is powered down, and restores it when powered up. If a device is powered down while discoverable, it will be discoverable when powered on again. This could lead to inadvertent exposure of the bluetooth stack to physically nearby attackers."}] | 2022-06-03T16:22Z | 2022-03-02T23:15Z | Incorrect Authorization | The software performs an authorization check when an actor attempts to access a resource or perform an action, but it does not correctly perform the check. This allows attackers to bypass intended access restrictions. |
Assuming a user with a given identity, authorization is the process of determining whether that user can access a given resource, based on the user's privileges and any permissions or other access-control specifications that apply to the resource.
When access control checks are incorrectly applied, users are able to access data or perform actions that they should not be allowed to perform. This can lead to a wide range of problems, including information exposures, denial of service, and arbitrary code execution.
| https://cwe.mitre.org/data/definitions/863.html | 0 | Luiz Augusto von Dentz | 2021-06-24 16:32:04-07:00 | adapter: Fix storing discoverable setting
The discoverable setting shall only be stored when it is changed via the Discoverable
property, and not when a discovery client sets it, as that change is considered
temporary, lasting only for the lifetime of the discovery.
{
struct mgmt_cp_start_service_discovery *sd_cp;
GSList *l;
DBG("");
if (discovery_filter_to_mgmt_cp(adapter, &sd_cp)) {
btd_error(adapter->dev_id,
"discovery_filter_to_mgmt_cp returned error");
return -ENOMEM;
}
for (l = adapter->discovery_list; l; l = g_slist_next(l)) {
struct discovery_client *client = l->data;
if (!client->discovery_filter)
continue;
if (client->discovery_filter->discoverable)
break;
}
set_discovery_discoverable(adapter, l ? true : false);
/*
* If filters are equal, then don't update scan, except for when
* starting discovery.
*/
if (filters_equal(adapter->current_discovery_filter, sd_cp) &&
adapter->discovering != 0) {
DBG("filters were equal, deciding to not restart the scan.");
g_free(sd_cp);
return 0;
}
g_free(adapter->current_discovery_filter);
adapter->current_discovery_filter = sd_cp;
trigger_start_discovery(adapter, 0);
return -EINPROGRESS;
} | 162 | True | 1 |
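
The three bluez rows above show why the Discoverable state leaked across power cycles: settings_changed() persists the flag whenever it changes, even when update_discovery_filter()/set_discovery_discoverable() only raised it temporarily for a scan. Purely as an illustration of the pattern the fix aims for, the minimal standalone sketch below persists only property-driven changes and never discovery-driven ones; the struct and function names are hypothetical, not the bluez API.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical adapter state, not the bluez structures: the extra flag
 * records whether the current discoverable value is only a temporary
 * override requested by a discovery client. */
struct adapter {
    bool discoverable;
    bool discovery_discoverable;
};

/* Stand-in for writing the persistent settings file. */
static void store_setting(const char *name, bool value)
{
    printf("persist %s=%d\n", name, (int)value);
}

/* Change made through the user-visible Discoverable property: the only
 * path that should reach persistent storage. */
static void set_discoverable_property(struct adapter *a, bool value)
{
    a->discoverable = value;
    a->discovery_discoverable = false;
    store_setting("Discoverable", value);
}

/* Temporary change requested by a discovery client: applied to the
 * controller but never persisted, so a power cycle cannot silently bring
 * the adapter back up in discoverable mode. */
static void set_discovery_discoverable(struct adapter *a, bool value)
{
    a->discoverable = value;
    a->discovery_discoverable = true;
}

int main(void)
{
    struct adapter a = { false, false };
    set_discoverable_property(&a, true);   /* persisted */
    set_discovery_discoverable(&a, true);  /* temporary, not persisted */
    return 0;
}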
CVE-2020-27208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:P/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | PHYSICAL | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 6.8 | MEDIUM | 0.9 | 5.9 | False | [{'url': 'https://twitter.com/SoloKeysSec', 'name': 'https://twitter.com/SoloKeysSec', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://solokeys.com', 'name': 'https://solokeys.com', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://eprint.iacr.org/2021/640', 'name': 'https://eprint.iacr.org/2021/640', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'name': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/solokeys/solo/commit/a9c02cd354f34b48195a342c7f524abdef5cbcec', 'name': 'https://github.com/solokeys/solo/commit/a9c02cd354f34b48195a342c7f524abdef5cbcec', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'name': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-326'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:solokeys:solo_firmware:4.0.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:solokeys:solo:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:solokeys:somu_firmware:-:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:solokeys:somu:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:nitrokey:fido2_firmware:-:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:nitrokey:fido2:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}] | [{'lang': 'en', 'value': 'The flash read-out protection (RDP) level is not enforced during the device initialization phase of the SoloKeys Solo 4.0.0 & Somu and the Nitrokey FIDO2 token. This allows an adversary to downgrade the RDP level and access secrets such as private ECC keys from SRAM via the debug interface.'}] | 2021-05-28T15:41Z | 2021-05-21T12:15Z | Inadequate Encryption Strength | The software stores or transmits sensitive data using an encryption scheme that is theoretically sound, but is not strong enough for the level of protection required. | A weak encryption scheme can be subjected to brute force attacks that have a reasonable chance of succeeding using current attack methods and resources.
| https://cwe.mitre.org/data/definitions/326.html | 0 | Conor Patrick | 2021-01-27 19:47:45-08:00 | patches to improve resistance to fault injection | a9c02cd354f34b48195a342c7f524abdef5cbcec | False | visit repo url | visit repo url | visit repo url | visit repo url | visit repo url | solokeys | visit repo url | visit repo url | device_init | device_init() | [] | void device_init()
{
hw_init(LOW_FREQUENCY);
if (! tsc_sensor_exists())
{
_NFC_status = nfc_init();
}
if (_NFC_status == NFC_IS_ACTIVE)
{
printf1(TAG_NFC, "Have NFC\r\n");
isLowFreq = 1;
IS_BUTTON_PRESSED = is_physical_button_pressed;
}
else
{
printf1(TAG_NFC, "Have NO NFC\r\n");
hw_init(HIGH_FREQUENCY);
isLowFreq = 0;
device_init_button();
}
usbhid_init();
ctaphid_init();
ctap_init();
device_migrate();
#if BOOT_TO_DFU
flash_option_bytes_init(1);
#else
flash_option_bytes_init(0);
#endif
} | 97 | True | 1 |
CVE-2020-27208 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:P/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | PHYSICAL | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 6.8 | MEDIUM | 0.9 | 5.9 | False | [{'url': 'https://twitter.com/SoloKeysSec', 'name': 'https://twitter.com/SoloKeysSec', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://solokeys.com', 'name': 'https://solokeys.com', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://eprint.iacr.org/2021/640', 'name': 'https://eprint.iacr.org/2021/640', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'name': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/solokeys/solo/commit/a9c02cd354f34b48195a342c7f524abdef5cbcec', 'name': 'https://github.com/solokeys/solo/commit/a9c02cd354f34b48195a342c7f524abdef5cbcec', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'name': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-326'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:solokeys:solo_firmware:4.0.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:solokeys:solo:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:solokeys:somu_firmware:-:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:solokeys:somu:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:nitrokey:fido2_firmware:-:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:h:nitrokey:fido2:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}] | [{'lang': 'en', 'value': 'The flash read-out protection (RDP) level is not enforced during the device initialization phase of the SoloKeys Solo 4.0.0 & Somu and the Nitrokey FIDO2 token. This allows an adversary to downgrade the RDP level and access secrets such as private ECC keys from SRAM via the debug interface.'}] | 2021-05-28T15:41Z | 2021-05-21T12:15Z | Inadequate Encryption Strength | The software stores or transmits sensitive data using an encryption scheme that is theoretically sound, but is not strong enough for the level of protection required. | A weak encryption scheme can be subjected to brute force attacks that have a reasonable chance of succeeding using current attack methods and resources.
| https://cwe.mitre.org/data/definitions/326.html | 0 | Conor Patrick | 2021-01-27 19:47:45-08:00 | patches to improve resistance to fault injection | a9c02cd354f34b48195a342c7f524abdef5cbcec | False | visit repo url | visit repo url | visit repo url | visit repo url | visit repo url | solokeys | visit repo url | visit repo url | flash_option_bytes_init | flash_option_bytes_init( int boot_from_dfu) | ['boot_from_dfu'] | void flash_option_bytes_init(int boot_from_dfu)
{
uint32_t val = 0xfffff8aa;
if (boot_from_dfu){
val &= ~(1<<27); // nBOOT0 = 0 (boot from system rom)
}
else {
if (solo_is_locked())
{
val = 0xfffff8cc;
}
}
val &= ~(1<<26); // nSWBOOT0 = 0 (boot from nBoot0)
val &= ~(1<<25); // SRAM2_RST = 1 (erase sram on reset)
val &= ~(1<<24); // SRAM2_PE = 1 (parity check en)
if (FLASH->OPTR == val)
{
return;
}
__disable_irq();
while (FLASH->SR & (1<<16))
;
flash_unlock();
if (FLASH->CR & (1<<30))
{
FLASH->OPTKEYR = 0x08192A3B;
FLASH->OPTKEYR = 0x4C5D6E7F;
}
FLASH->OPTR =val;
FLASH->CR |= (1<<17);
while (FLASH->SR & (1<<16))
;
flash_lock();
__enable_irq();
} | 169 | True | 1 |
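
The two SoloKeys rows above belong to a commit described as improving resistance to fault injection: a single glitched comparison around the option-byte/RDP handling in device_init()/flash_option_bytes_init() can leave read-out protection weaker than intended. The sketch below shows the usual generic hardening pattern, namely non-trivial magic values, repeated volatile reads, and failing closed on any inconsistency; the constants, the lock_flag variable, and token_is_locked() are assumptions for illustration, not the SoloKeys implementation.

#include <stdint.h>
#include <stdio.h>

#define TOKEN_LOCKED   0x4C4F434BU
#define TOKEN_UNLOCKED 0xB3B0BCB4U           /* bitwise complement of TOKEN_LOCKED */

/* In real firmware this would be the option-byte/RDP state; here it is a
 * plain variable so the sketch builds on its own. */
static volatile uint32_t lock_flag = TOKEN_LOCKED;

static void fail_closed(void)
{
    for (;;) { }                             /* halt rather than continue insecurely */
}

static int token_is_locked(void)
{
    uint32_t first  = lock_flag;             /* two independent volatile reads */
    uint32_t second = lock_flag;

    if (first != second)
        fail_closed();

    if (first == TOKEN_LOCKED) {
        if (second != TOKEN_LOCKED)          /* redundant re-check against glitches */
            fail_closed();
        return 1;
    }

    if (first != TOKEN_UNLOCKED)             /* neither valid encoding: treat as tampering */
        fail_closed();

    return 0;
}

int main(void)
{
    printf("locked=%d\n", token_is_locked());
    return 0;
}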
CVE-2020-27209 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://github.com/kmackay/micro-ecc/releases', 'name': 'https://github.com/kmackay/micro-ecc/releases', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://eprint.iacr.org/2021/640', 'name': 'https://eprint.iacr.org/2021/640', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'name': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'name': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'name': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-noinfo'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:micro-ecc_project:micro-ecc:1.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'The ECDSA operation of the micro-ecc library 1.0 is vulnerable to simple power analysis attacks which allows an adversary to extract the private ECC key.'}] | 2021-05-27T18:21Z | 2021-05-20T21:15Z | Insufficient Information | There is insufficient information about the issue to classify it; details are unkown or unspecified. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Ken MacKay | 2020-10-07 10:47:40-07:00 | Fix for #168 | 1b5f5cea5145c96dd8791b9b2c41424fc74c2172 | False | kmackay/micro-ecc | ECDH and ECDSA for 8-bit, 32-bit, and 64-bit processors. | 2013-05-05 16:29:20 | 2022-03-28 21:56:06 | kmackay | 1022.0 | 387.0 | bits2int | bits2int( uECC_word_t * native , const uint8_t * bits , unsigned bits_size , uECC_Curve curve) | ['native', 'bits', 'bits_size', 'curve'] | static void bits2int(uECC_word_t *native,
const uint8_t *bits,
unsigned bits_size,
uECC_Curve curve) {
unsigned num_n_bytes = BITS_TO_BYTES(curve->num_n_bits);
unsigned num_n_words = BITS_TO_WORDS(curve->num_n_bits);
int shift;
uECC_word_t carry;
uECC_word_t *ptr;
if (bits_size > num_n_bytes) {
bits_size = num_n_bytes;
}
uECC_vli_clear(native, num_n_words);
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
bcopy((uint8_t *) native, bits, bits_size);
#else
uECC_vli_bytesToNative(native, bits, bits_size);
#endif
if (bits_size * 8 <= (unsigned)curve->num_n_bits) {
return;
}
shift = bits_size * 8 - curve->num_n_bits;
carry = 0;
ptr = native + num_n_words;
while (ptr-- > native) {
uECC_word_t temp = *ptr;
*ptr = (temp >> shift) | carry;
carry = temp << (uECC_WORD_BITS - shift);
}
/* Reduce mod curve_n */
if (uECC_vli_cmp_unsafe(curve->n, native, num_n_words) != 1) {
uECC_vli_sub(native, native, curve->n, num_n_words);
}
} | 195 | True | 1 |

CVE-2020-27209 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://github.com/kmackay/micro-ecc/releases', 'name': 'https://github.com/kmackay/micro-ecc/releases', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://eprint.iacr.org/2021/640', 'name': 'https://eprint.iacr.org/2021/640', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'name': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'name': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'name': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-noinfo'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:micro-ecc_project:micro-ecc:1.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'The ECDSA operation of the micro-ecc library 1.0 is vulnerable to simple power analysis attacks which allows an adversary to extract the private ECC key.'}] | 2021-05-27T18:21Z | 2021-05-20T21:15Z | Insufficient Information | There is insufficient information about the issue to classify it; details are unkown or unspecified. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Ken MacKay | 2020-10-07 10:47:40-07:00 | Fix for #168 | 1b5f5cea5145c96dd8791b9b2c41424fc74c2172 | False | kmackay/micro-ecc | ECDH and ECDSA for 8-bit, 32-bit, and 64-bit processors. | 2013-05-05 16:29:20 | 2022-03-28 21:56:06 | kmackay | 1022.0 | 387.0 | uECC_sign_with_k | uECC_sign_with_k( const uint8_t * private_key , const uint8_t * message_hash , unsigned hash_size , uECC_word_t * k , uint8_t * signature , uECC_Curve curve) | ['private_key', 'message_hash', 'hash_size', 'k', 'signature', 'curve'] | static int uECC_sign_with_k(const uint8_t *private_key,
const uint8_t *message_hash,
unsigned hash_size,
uECC_word_t *k,
uint8_t *signature,
uECC_Curve curve) {
uECC_word_t tmp[uECC_MAX_WORDS];
uECC_word_t s[uECC_MAX_WORDS];
uECC_word_t *k2[2] = {tmp, s};
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
uECC_word_t *p = (uECC_word_t *)signature;
#else
uECC_word_t p[uECC_MAX_WORDS * 2];
#endif
uECC_word_t carry;
wordcount_t num_words = curve->num_words;
wordcount_t num_n_words = BITS_TO_WORDS(curve->num_n_bits);
bitcount_t num_n_bits = curve->num_n_bits;
/* Make sure 0 < k < curve_n */
if (uECC_vli_isZero(k, num_words) || uECC_vli_cmp(curve->n, k, num_n_words) != 1) {
return 0;
}
carry = regularize_k(k, tmp, s, curve);
EccPoint_mult(p, curve->G, k2[!carry], 0, num_n_bits + 1, curve);
if (uECC_vli_isZero(p, num_words)) {
return 0;
}
/* If an RNG function was specified, get a random number
to prevent side channel analysis of k. */
if (!g_rng_function) {
uECC_vli_clear(tmp, num_n_words);
tmp[0] = 1;
} else if (!uECC_generate_random_int(tmp, curve->n, num_n_words)) {
return 0;
}
/* Prevent side channel analysis of uECC_vli_modInv() to determine
bits of k / the private key by premultiplying by a random number */
uECC_vli_modMult(k, k, tmp, curve->n, num_n_words); /* k' = rand * k */
uECC_vli_modInv(k, k, curve->n, num_n_words); /* k = 1 / k' */
uECC_vli_modMult(k, k, tmp, curve->n, num_n_words); /* k = 1 / k */
#if uECC_VLI_NATIVE_LITTLE_ENDIAN == 0
uECC_vli_nativeToBytes(signature, curve->num_bytes, p); /* store r */
#endif
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
bcopy((uint8_t *) tmp, private_key, BITS_TO_BYTES(curve->num_n_bits));
#else
uECC_vli_bytesToNative(tmp, private_key, BITS_TO_BYTES(curve->num_n_bits)); /* tmp = d */
#endif
s[num_n_words - 1] = 0;
uECC_vli_set(s, p, num_words);
uECC_vli_modMult(s, tmp, s, curve->n, num_n_words); /* s = r*d */
bits2int(tmp, message_hash, hash_size, curve);
uECC_vli_modAdd(s, tmp, s, curve->n, num_n_words); /* s = e + r*d */
uECC_vli_modMult(s, s, k, curve->n, num_n_words); /* s = (e + r*d) / k */
if (uECC_vli_numBits(s, num_n_words) > (bitcount_t)curve->num_bytes * 8) {
return 0;
}
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
bcopy((uint8_t *) signature + curve->num_bytes, (uint8_t *) s, curve->num_bytes);
#else
uECC_vli_nativeToBytes(signature + curve->num_bytes, curve->num_bytes, s);
#endif
return 1;
} | 440 | True | 1 |
CVE-2020-27209 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | NONE | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://github.com/kmackay/micro-ecc/releases', 'name': 'https://github.com/kmackay/micro-ecc/releases', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://eprint.iacr.org/2021/640', 'name': 'https://eprint.iacr.org/2021/640', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'name': 'https://www.aisec.fraunhofer.de/en/FirmwareProtection.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'name': 'https://github.com/kmackay/micro-ecc/commit/1b5f5cea5145c96dd8791b9b2c41424fc74c2172', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'name': 'https://www.aisec.fraunhofer.de/de/das-institut/wissenschaftliche-exzellenz/security-and-trust-in-open-source-security-tokens.html', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-noinfo'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:micro-ecc_project:micro-ecc:1.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'The ECDSA operation of the micro-ecc library 1.0 is vulnerable to simple power analysis attacks which allows an adversary to extract the private ECC key.'}] | 2021-05-27T18:21Z | 2021-05-20T21:15Z | Insufficient Information | There is insufficient information about the issue to classify it; details are unkown or unspecified. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Ken MacKay | 2020-10-07 10:47:40-07:00 | Fix for #168 | 1b5f5cea5145c96dd8791b9b2c41424fc74c2172 | False | kmackay/micro-ecc | ECDH and ECDSA for 8-bit, 32-bit, and 64-bit processors. | 2013-05-05 16:29:20 | 2022-03-28 21:56:06 | kmackay | 1022.0 | 387.0 | uECC_verify | uECC_verify( const uint8_t * public_key , const uint8_t * message_hash , unsigned hash_size , const uint8_t * signature , uECC_Curve curve) | ['public_key', 'message_hash', 'hash_size', 'signature', 'curve'] | int uECC_verify(const uint8_t *public_key,
const uint8_t *message_hash,
unsigned hash_size,
const uint8_t *signature,
uECC_Curve curve) {
uECC_word_t u1[uECC_MAX_WORDS], u2[uECC_MAX_WORDS];
uECC_word_t z[uECC_MAX_WORDS];
uECC_word_t sum[uECC_MAX_WORDS * 2];
uECC_word_t rx[uECC_MAX_WORDS];
uECC_word_t ry[uECC_MAX_WORDS];
uECC_word_t tx[uECC_MAX_WORDS];
uECC_word_t ty[uECC_MAX_WORDS];
uECC_word_t tz[uECC_MAX_WORDS];
const uECC_word_t *points[4];
const uECC_word_t *point;
bitcount_t num_bits;
bitcount_t i;
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
uECC_word_t *_public = (uECC_word_t *)public_key;
#else
uECC_word_t _public[uECC_MAX_WORDS * 2];
#endif
uECC_word_t r[uECC_MAX_WORDS], s[uECC_MAX_WORDS];
wordcount_t num_words = curve->num_words;
wordcount_t num_n_words = BITS_TO_WORDS(curve->num_n_bits);
rx[num_n_words - 1] = 0;
r[num_n_words - 1] = 0;
s[num_n_words - 1] = 0;
#if uECC_VLI_NATIVE_LITTLE_ENDIAN
bcopy((uint8_t *) r, signature, curve->num_bytes);
bcopy((uint8_t *) s, signature + curve->num_bytes, curve->num_bytes);
#else
uECC_vli_bytesToNative(_public, public_key, curve->num_bytes);
uECC_vli_bytesToNative(
_public + num_words, public_key + curve->num_bytes, curve->num_bytes);
uECC_vli_bytesToNative(r, signature, curve->num_bytes);
uECC_vli_bytesToNative(s, signature + curve->num_bytes, curve->num_bytes);
#endif
/* r, s must not be 0. */
if (uECC_vli_isZero(r, num_words) || uECC_vli_isZero(s, num_words)) {
return 0;
}
/* r, s must be < n. */
if (uECC_vli_cmp_unsafe(curve->n, r, num_n_words) != 1 ||
uECC_vli_cmp_unsafe(curve->n, s, num_n_words) != 1) {
return 0;
}
/* Calculate u1 and u2. */
uECC_vli_modInv(z, s, curve->n, num_n_words); /* z = 1/s */
u1[num_n_words - 1] = 0;
bits2int(u1, message_hash, hash_size, curve);
uECC_vli_modMult(u1, u1, z, curve->n, num_n_words); /* u1 = e/s */
uECC_vli_modMult(u2, r, z, curve->n, num_n_words); /* u2 = r/s */
/* Calculate sum = G + Q. */
uECC_vli_set(sum, _public, num_words);
uECC_vli_set(sum + num_words, _public + num_words, num_words);
uECC_vli_set(tx, curve->G, num_words);
uECC_vli_set(ty, curve->G + num_words, num_words);
uECC_vli_modSub(z, sum, tx, curve->p, num_words); /* z = x2 - x1 */
XYcZ_add(tx, ty, sum, sum + num_words, curve);
uECC_vli_modInv(z, z, curve->p, num_words); /* z = 1/z */
apply_z(sum, sum + num_words, z, curve);
/* Use Shamir's trick to calculate u1*G + u2*Q */
points[0] = 0;
points[1] = curve->G;
points[2] = _public;
points[3] = sum;
num_bits = smax(uECC_vli_numBits(u1, num_n_words),
uECC_vli_numBits(u2, num_n_words));
point = points[(!!uECC_vli_testBit(u1, num_bits - 1)) |
((!!uECC_vli_testBit(u2, num_bits - 1)) << 1)];
uECC_vli_set(rx, point, num_words);
uECC_vli_set(ry, point + num_words, num_words);
uECC_vli_clear(z, num_words);
z[0] = 1;
for (i = num_bits - 2; i >= 0; --i) {
uECC_word_t index;
curve->double_jacobian(rx, ry, z, curve);
index = (!!uECC_vli_testBit(u1, i)) | ((!!uECC_vli_testBit(u2, i)) << 1);
point = points[index];
if (point) {
uECC_vli_set(tx, point, num_words);
uECC_vli_set(ty, point + num_words, num_words);
apply_z(tx, ty, z, curve);
uECC_vli_modSub(tz, rx, tx, curve->p, num_words); /* Z = x2 - x1 */
XYcZ_add(tx, ty, rx, ry, curve);
uECC_vli_modMult_fast(z, z, tz, curve);
}
}
uECC_vli_modInv(z, z, curve->p, num_words); /* Z = 1/Z */
apply_z(rx, ry, z, curve);
/* v = x1 (mod n) */
if (uECC_vli_cmp_unsafe(curve->n, rx, num_n_words) != 1) {
uECC_vli_sub(rx, rx, curve->n, num_n_words);
}
/* Accept only if v == r. */
return (int)(uECC_vli_equal(rx, r, num_words));
} | 812 | True | 1 |
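
The three micro-ecc rows above (bits2int, uECC_sign_with_k, uECC_verify) relate to a CVE about simple power analysis: when the operations performed differ for 0 and 1 key bits, a power trace of a single signature can reveal the nonce and hence the private key. The upstream fix reworked the library internals; purely as an analogy for the regular-operation-sequence idea, the hedged sketch below applies a Montgomery ladder with a constant-time swap to modular exponentiation over 64-bit integers. It assumes a gcc/clang compiler with unsigned __int128 and is not curve arithmetic or the micro-ecc patch.

#include <stdint.h>
#include <stdio.h>

/* Every iteration performs the same two operations (one multiplication,
 * one squaring) regardless of the secret bit, so the operation sequence
 * itself does not depend on the exponent. Real ECC code applies the same
 * ladder structure to point addition/doubling. */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)((unsigned __int128)a * b % m);
}

static uint64_t ladder_powmod(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t r0 = 1 % mod;                   /* R0 = base^0 */
    uint64_t r1 = base % mod;                /* R1 = base^1 */

    for (int i = 63; i >= 0; i--) {
        uint64_t bit  = (exp >> i) & 1u;
        uint64_t mask = 0 - bit;             /* all-ones when bit == 1 */

        /* Constant-time conditional swap instead of a key-dependent branch. */
        uint64_t t = (r0 ^ r1) & mask;
        r0 ^= t;
        r1 ^= t;

        r1 = mulmod(r0, r1, mod);            /* R1 = R0 * R1 */
        r0 = mulmod(r0, r0, mod);            /* R0 = R0^2    */

        t = (r0 ^ r1) & mask;                /* swap back */
        r0 ^= t;
        r1 ^= t;
    }
    return r0;
}

int main(void)
{
    /* Sanity check: 7^560 mod 561 == 1 (561 is a Carmichael number). */
    printf("%llu\n", (unsigned long long)ladder_powmod(7, 560, 561));
    return 0;
}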
CVE-2020-27347 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/tmux/tmux/commit/a868bacb46e3c900530bed47a1c6f85b0fbe701c', 'name': 'https://github.com/tmux/tmux/commit/a868bacb46e3c900530bed47a1c6f85b0fbe701c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.openwall.com/lists/oss-security/2020/11/05/3', 'name': 'https://www.openwall.com/lists/oss-security/2020/11/05/3', 'refsource': 'MISC', 'tags': ['Exploit', 'Mailing List', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202011-10', 'name': 'GLSA-202011-10', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}, {'url': 'https://raw.githubusercontent.com/tmux/tmux/3.1c/CHANGES', 'name': 'https://raw.githubusercontent.com/tmux/tmux/3.1c/CHANGES', 'refsource': 'CONFIRM', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:tmux_project:tmux:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.9', 'versionEndIncluding': '3.1b', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In tmux before version 3.1c the function input_csi_dispatch_sgr_colon() in file input.c contained a stack-based buffer-overflow that can be exploited by terminal output.'}] | 2020-11-17T19:15Z | 2020-11-06T03:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | nicm | 2020-10-29 16:33:01+00:00 | Do not write after the end of the array and overwrite the stack when
colon-separated SGR sequences contain empty arguments. Reported by Sergey
Nizovtsev. | a868bacb46e3c900530bed47a1c6f85b0fbe701c | False | tmux/tmux | tmux source code | 2015-06-03 23:32:55 | 2022-08-25 12:27:31 | tmux | 25995.0 | 1794.0 | input_csi_dispatch_sgr_colon | input_csi_dispatch_sgr_colon( struct input_ctx * ictx , u_int i) | ['ictx', 'i'] | input_csi_dispatch_sgr_colon(struct input_ctx *ictx, u_int i)
{
struct grid_cell *gc = &ictx->cell.cell;
char *s = ictx->param_list[i].str, *copy, *ptr, *out;
int p[8];
u_int n;
const char *errstr;
for (n = 0; n < nitems(p); n++)
p[n] = -1;
n = 0;
ptr = copy = xstrdup(s);
while ((out = strsep(&ptr, ":")) != NULL) {
if (*out != '\0') {
p[n++] = strtonum(out, 0, INT_MAX, &errstr);
if (errstr != NULL || n == nitems(p)) {
free(copy);
return;
}
} else
n++;
log_debug("%s: %u = %d", __func__, n - 1, p[n - 1]);
}
free(copy);
if (n == 0)
return;
if (p[0] == 4) {
if (n != 2)
return;
switch (p[1]) {
case 0:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
break;
case 1:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
gc->attr |= GRID_ATTR_UNDERSCORE;
break;
case 2:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
gc->attr |= GRID_ATTR_UNDERSCORE_2;
break;
case 3:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
gc->attr |= GRID_ATTR_UNDERSCORE_3;
break;
case 4:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
gc->attr |= GRID_ATTR_UNDERSCORE_4;
break;
case 5:
gc->attr &= ~GRID_ATTR_ALL_UNDERSCORE;
gc->attr |= GRID_ATTR_UNDERSCORE_5;
break;
}
return;
}
if (n < 2 || (p[0] != 38 && p[0] != 48 && p[0] != 58))
return;
switch (p[1]) {
case 2:
if (n < 3)
break;
if (n == 5)
i = 2;
else
i = 3;
if (n < i + 3)
break;
input_csi_dispatch_sgr_rgb_do(ictx, p[0], p[i], p[i + 1],
p[i + 2]);
break;
case 5:
if (n < 3)
break;
input_csi_dispatch_sgr_256_do(ictx, p[0], p[2]);
break;
}
} | 460 | True | 1 |
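
In the tmux row above, the empty-argument branch advances n without re-checking it against nitems(p), so a burst of empty colon-separated SGR sub-parameters pushes the index past the 8-entry stack array, after which the next non-empty token (and the debug read of p[n - 1]) lands out of bounds. The standalone, hedged sketch below shows a bounds-checked version of that parsing loop; the names and return convention are illustrative, not the upstream patch, and it assumes a POSIX libc for strdup()/strsep().

#define _DEFAULT_SOURCE                       /* strdup(), strsep() on glibc */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NITEMS(a) (sizeof(a) / sizeof((a)[0]))

/* Every path that advances the index first checks that another slot
 * exists, so empty sub-parameters ("38::2::255::0::0") can no longer push
 * the index past the array. */
static int parse_sgr_params(const char *s, int *p, size_t nparams, size_t *nout)
{
    char *copy, *ptr, *out;
    size_t n = 0;

    for (size_t i = 0; i < nparams; i++)
        p[i] = -1;

    ptr = copy = strdup(s);
    if (copy == NULL)
        return -1;

    while ((out = strsep(&ptr, ":")) != NULL) {
        if (n >= nparams) {                   /* bound checked before any use of p[n] */
            free(copy);
            return -1;
        }
        if (*out != '\0') {
            char *end;
            long v = strtol(out, &end, 10);
            if (*end != '\0' || v < 0 || v > INT_MAX) {
                free(copy);
                return -1;
            }
            p[n] = (int)v;
        }
        n++;                                  /* empty argument keeps the -1 default */
    }
    free(copy);
    *nout = n;
    return 0;
}

int main(void)
{
    int p[8];
    size_t n;

    /* A burst of empty arguments now fails cleanly instead of overflowing. */
    if (parse_sgr_params("4:::::::::::1", p, NITEMS(p), &n) != 0)
        puts("rejected overlong parameter list");
    return 0;
}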
CVE-2020-28248 | False | False | False | True | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/gemini-testing/png-img/commit/14ac462a32ca4b3b78f56502ac976d5b0222ce3d', 'name': 'https://github.com/gemini-testing/png-img/commit/14ac462a32ca4b3b78f56502ac976d5b0222ce3d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://securitylab.github.com/advisories/GHSL-2020-142-gemini-png-img', 'name': 'https://securitylab.github.com/advisories/GHSL-2020-142-gemini-png-img', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/gemini-testing/png-img', 'name': 'https://github.com/gemini-testing/png-img', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://github.com/gemini-testing/png-img/compare/v3.0.0...v3.1.0', 'name': 'https://github.com/gemini-testing/png-img/compare/v3.0.0...v3.1.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}, {'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:png-img_project:png-img:*:*:*:*:*:*:*:*', 'versionEndExcluding': '3.1.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An integer overflow in the PngImg::InitStorage_() function of png-img before 3.1.0 leads to an under-allocation of heap memory and subsequently an exploitable heap-based buffer overflow when loading a crafted PNG file.'}] | 2021-07-21T11:39Z | 2021-02-20T00:15Z | Integer Overflow or Wraparound | The software performs a calculation that can produce an integer overflow or wraparound, when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control. | An integer overflow or wraparound occurs when an integer value is incremented to a value that is too large to store in the associated representation. When this occurs, the value may wrap to become a very small or negative number. While this may be intended behavior in circumstances that rely on wrapping, it can have security consequences if the wrap is unexpected. This is especially the case if the integer overflow can be triggered using user-supplied inputs. This becomes security-critical when the result is used to control looping, make a security decision, or determine the offset or size in behaviors such as memory allocation, copying, concatenation, etc.
| https://cwe.mitre.org/data/definitions/190.html | 0 | Mikhail Cheshkov | 2020-08-06 03:45:40+03:00 | Handle image size overflow | 14ac462a32ca4b3b78f56502ac976d5b0222ce3d | False | gemini-testing/png-img | Lite libpng wrapper for node.js | 2014-10-09 08:40:21 | 2022-07-21 10:20:49 | null | gemini-testing | 29.0 | 12.0 | PngImg::InitStorage_ | PngImg::InitStorage_() | [] | void PngImg::InitStorage_() {
rowPtrs_.resize(info_.height, nullptr);
data_ = new png_byte[info_.height * info_.rowbytes];
for(size_t i = 0; i < info_.height; ++i) {
rowPtrs_[i] = data_ + i * info_.rowbytes;
}
} | 63 | True | 1 |
CVE-2020-28248 | False | False | False | True | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/gemini-testing/png-img/commit/14ac462a32ca4b3b78f56502ac976d5b0222ce3d', 'name': 'https://github.com/gemini-testing/png-img/commit/14ac462a32ca4b3b78f56502ac976d5b0222ce3d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://securitylab.github.com/advisories/GHSL-2020-142-gemini-png-img', 'name': 'https://securitylab.github.com/advisories/GHSL-2020-142-gemini-png-img', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}, {'url': 'https://github.com/gemini-testing/png-img', 'name': 'https://github.com/gemini-testing/png-img', 'refsource': 'MISC', 'tags': ['Product']}, {'url': 'https://github.com/gemini-testing/png-img/compare/v3.0.0...v3.1.0', 'name': 'https://github.com/gemini-testing/png-img/compare/v3.0.0...v3.1.0', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}, {'lang': 'en', 'value': 'CWE-190'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:png-img_project:png-img:*:*:*:*:*:*:*:*', 'versionEndExcluding': '3.1.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An integer overflow in the PngImg::InitStorage_() function of png-img before 3.1.0 leads to an under-allocation of heap memory and subsequently an exploitable heap-based buffer overflow when loading a crafted PNG file.'}] | 2021-07-21T11:39Z | 2021-02-20T00:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Mikhail Cheshkov | 2020-08-06 03:45:40+03:00 | Handle image size overflow | 14ac462a32ca4b3b78f56502ac976d5b0222ce3d | False | gemini-testing/png-img | Lite libpng wrapper for node.js | 2014-10-09 08:40:21 | 2022-07-21 10:20:49 | null | gemini-testing | 29.0 | 12.0 | PngImg::InitStorage_ | PngImg::InitStorage_() | [] | void PngImg::InitStorage_() {
rowPtrs_.resize(info_.height, nullptr);
data_ = new png_byte[info_.height * info_.rowbytes];
for(size_t i = 0; i < info_.height; ++i) {
rowPtrs_[i] = data_ + i * info_.rowbytes;
}
} | 63 | True | 1 |
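
The two png-img rows above allocate info_.height * info_.rowbytes bytes without validating the product, so a 32-bit overflow under-allocates the buffer that the row pointers are then laid out over. A hedged C sketch of the overflow-checked allocation pattern follows; the helper is hypothetical and plain C, not the project's C++ fix verbatim.

#include <stdint.h>
#include <stdlib.h>

/* Validate the height * rowbytes product before using it as an allocation
 * size, so a crafted header cannot wrap the size and turn later row writes
 * into a heap overflow. */
static uint8_t *alloc_image(uint32_t height, uint32_t rowbytes)
{
    if (height == 0 || rowbytes == 0)
        return NULL;

    /* Classic overflow guard: reject if the product would exceed SIZE_MAX. */
    if (height > SIZE_MAX / rowbytes)
        return NULL;

    return malloc((size_t)height * (size_t)rowbytes);
}

int main(void)
{
    uint8_t *buf = alloc_image(16, 64);       /* 1 KiB, well within bounds */
    free(buf);
    return 0;
}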
CVE-2020-29074 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/LibVNC/x11vnc/commit/69eeb9f7baa14ca03b16c9de821f9876def7a36a', 'name': 'https://github.com/LibVNC/x11vnc/commit/69eeb9f7baa14ca03b16c9de821f9876def7a36a', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.debian.org/security/2020/dsa-4799', 'name': 'DSA-4799', 'refsource': 'DEBIAN', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2020/12/msg00018.html', 'name': '[debian-lts-announce] 20201210 [SECURITY] [DLA 2490-1] x11vnc security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/MHVXHZE3YIP4RTWGQ24IDBSW44XPRDOC/', 'name': 'FEDORA-2021-c5b679877e', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/H2FLWSVH32O6JXLRQBYDQLP7XRSTLUPQ/', 'name': 'FEDORA-2021-93911302d6', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/PZL6NQTNK5PT63D2JX5YVV5OLUL76S5C/', 'name': 'FEDORA-2021-069c0c3950', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-732'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:x11vnc_project:x11vnc:0.9.16:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'scan.c in x11vnc 0.9.16 uses IPC_CREAT|0777 in shmget calls, which allows access by actors other than the current user.'}] | 2021-07-21T11:39Z | 2020-11-25T23:15Z | Incorrect Permission Assignment for Critical Resource | The product specifies permissions for a security-critical resource in a way that allows that resource to be read or modified by unintended actors. | When a resource is given a permissions setting that provides access to a wider range of actors than required, it could lead to the exposure of sensitive information, or the modification of that resource by unintended parties. This is especially dangerous when the resource is related to program configuration, execution or sensitive user data.
| https://cwe.mitre.org/data/definitions/732.html | 0 | Guénal DAVALAN | 2020-11-18 08:40:45+01:00 | scan: limit access to shared memory segments to current user | 69eeb9f7baa14ca03b16c9de821f9876def7a36a | False | LibVNC/x11vnc | a VNC server for real X displays | 2014-09-02 16:04:15 | 2022-07-05 20:12:30 | null | LibVNC | 497.0 | 109.0 | shm_create | shm_create( XShmSegmentInfo * shm , XImage ** ximg_ptr , int w , int h , char * name) | ['shm', 'ximg_ptr', 'w', 'h', 'name'] | static int shm_create(XShmSegmentInfo *shm, XImage **ximg_ptr, int w, int h,
char *name) {
XImage *xim;
static int reported_flip = 0;
int db = 0;
shm->shmid = -1;
shm->shmaddr = (char *) -1;
*ximg_ptr = NULL;
if (nofb) {
return 1;
}
X_LOCK;
if (! using_shm || xform24to32 || raw_fb) {
/* we only need the XImage created */
xim = XCreateImage_wr(dpy, default_visual, depth, ZPixmap,
0, NULL, w, h, raw_fb ? 32 : BitmapPad(dpy), 0);
X_UNLOCK;
if (xim == NULL) {
rfbErr("XCreateImage(%s) failed.\n", name);
if (quiet) {
fprintf(stderr, "XCreateImage(%s) failed.\n",
name);
}
return 0;
}
if (db) fprintf(stderr, "shm_create simple %d %d\t%p %s\n", w, h, (void *)xim, name);
xim->data = (char *) malloc(xim->bytes_per_line * xim->height);
if (xim->data == NULL) {
rfbErr("XCreateImage(%s) data malloc failed.\n", name);
if (quiet) {
fprintf(stderr, "XCreateImage(%s) data malloc"
" failed.\n", name);
}
return 0;
}
if (flip_byte_order) {
char *order = flip_ximage_byte_order(xim);
if (! reported_flip && ! quiet) {
rfbLog("Changing XImage byte order"
" to %s\n", order);
reported_flip = 1;
}
}
*ximg_ptr = xim;
return 1;
}
if (! dpy) {
X_UNLOCK;
return 0;
}
xim = XShmCreateImage_wr(dpy, default_visual, depth, ZPixmap, NULL,
shm, w, h);
if (xim == NULL) {
rfbErr("XShmCreateImage(%s) failed.\n", name);
if (quiet) {
fprintf(stderr, "XShmCreateImage(%s) failed.\n", name);
}
X_UNLOCK;
return 0;
}
*ximg_ptr = xim;
#if HAVE_XSHM
shm->shmid = shmget(IPC_PRIVATE,
xim->bytes_per_line * xim->height, IPC_CREAT | 0777);
if (shm->shmid == -1) {
rfbErr("shmget(%s) failed.\n", name);
rfbLogPerror("shmget");
XDestroyImage(xim);
*ximg_ptr = NULL;
X_UNLOCK;
return 0;
}
shm->shmaddr = xim->data = (char *) shmat(shm->shmid, 0, 0);
if (shm->shmaddr == (char *)-1) {
rfbErr("shmat(%s) failed.\n", name);
rfbLogPerror("shmat");
XDestroyImage(xim);
*ximg_ptr = NULL;
shmctl(shm->shmid, IPC_RMID, 0);
shm->shmid = -1;
X_UNLOCK;
return 0;
}
shm->readOnly = False;
if (! XShmAttach_wr(dpy, shm)) {
rfbErr("XShmAttach(%s) failed.\n", name);
XDestroyImage(xim);
*ximg_ptr = NULL;
shmdt(shm->shmaddr);
shm->shmaddr = (char *) -1;
shmctl(shm->shmid, IPC_RMID, 0);
shm->shmid = -1;
X_UNLOCK;
return 0;
}
#endif
X_UNLOCK;
return 1;
} | 568 | True | 1 |
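
The x11vnc row above creates its XShm segments with IPC_CREAT | 0777, which lets any local account attach to the framebuffer memory; the fix simply drops the group/world permission bits. The small standalone sketch below shows the owner-only pattern with generic System V shared memory, not the x11vnc code path itself.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    size_t len = 4096;

    /* Readable and writable by the owning user only (0600), instead of the
     * world-accessible 0777 used in the vulnerable call. */
    int shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
    if (shmid == -1) {
        perror("shmget");
        return 1;
    }

    void *addr = shmat(shmid, NULL, 0);
    if (addr == (void *)-1) {
        perror("shmat");
        shmctl(shmid, IPC_RMID, NULL);
        return 1;
    }

    memset(addr, 0, len);                     /* use the segment */

    /* Mark the segment for removal early so it does not outlive the
     * process; the existing mapping stays valid until shmdt(). */
    shmctl(shmid, IPC_RMID, NULL);
    shmdt(addr);
    return 0;
}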
CVE-2020-29367 | False | False | False | True | AV:N/AC:M/Au:N/C:C/I:C/A:C | NETWORK | MEDIUM | NONE | COMPLETE | COMPLETE | COMPLETE | 9.3 | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | LOCAL | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/Blosc/c-blosc2/commit/c4c6470e88210afc95262c8b9fcc27e30ca043ee', 'name': 'https://github.com/Blosc/c-blosc2/commit/c4c6470e88210afc95262c8b9fcc27e30ca043ee', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=26442', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=26442', 'refsource': 'MISC', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:a2:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:a3:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:a4:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:a5:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:beta1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:beta2:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:beta3:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:beta4:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:c-blosc2_project:c-blosc2:2.0.0:beta5:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'blosc2.c in Blosc C-Blosc2 through 2.0.0.beta.5 has a heap-based buffer overflow when there is a lack of space to write compressed data.'}] | 2020-12-03T20:58Z | 2020-11-27T20:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Nathan Moinvaziri | 2020-10-17 16:43:22-07:00 | Fixed asan heap buffer overflow when not enough space to write compressed block size. | c4c6470e88210afc95262c8b9fcc27e30ca043ee | False | Blosc/c-blosc2 | A fast, compressed, persistent binary data store library for C. | 2015-08-07 10:51:35 | 2022-08-24 15:12:42 | https://c-blosc2.readthedocs.io | Blosc | 256.0 | 47.0 | blosc_c | blosc_c( struct thread_context * thread_context , int32_t bsize , int32_t leftoverblock , int32_t ntbytes , int32_t maxbytes , const uint8_t * src , const int32_t offset , uint8_t * dest , uint8_t * tmp , uint8_t * tmp2) | ['thread_context', 'bsize', 'leftoverblock', 'ntbytes', 'maxbytes', 'src', 'offset', 'dest', 'tmp', 'tmp2'] | static int blosc_c(struct thread_context* thread_context, int32_t bsize,
int32_t leftoverblock, int32_t ntbytes, int32_t maxbytes,
const uint8_t* src, const int32_t offset, uint8_t* dest,
uint8_t* tmp, uint8_t* tmp2) {
blosc2_context* context = thread_context->parent_context;
int dont_split = (context->header_flags & 0x10) >> 4;
int dict_training = context->use_dict && context->dict_cdict == NULL;
int32_t j, neblock, nstreams;
int32_t cbytes; /* number of compressed bytes in split */
int32_t ctbytes = 0; /* number of compressed bytes in block */
int64_t maxout;
int32_t typesize = context->typesize;
const char* compname;
int accel;
const uint8_t* _src;
uint8_t *_tmp = tmp, *_tmp2 = tmp2;
uint8_t *_tmp3 = thread_context->tmp4;
int last_filter_index = last_filter(context->filters, 'c');
bool memcpyed = context->header_flags & (uint8_t)BLOSC_MEMCPYED;
if (last_filter_index >= 0 || context->prefilter != NULL) {
/* Apply the filter pipeline just for the prefilter */
if (memcpyed && context->prefilter != NULL) {
// We only need the prefilter output
_src = pipeline_c(thread_context, bsize, src, offset, dest, _tmp2, _tmp3);
if (_src == NULL) {
return -9; // signals a problem with the filter pipeline
}
return bsize;
}
/* Apply regular filter pipeline */
_src = pipeline_c(thread_context, bsize, src, offset, _tmp, _tmp2, _tmp3);
if (_src == NULL) {
return -9; // signals a problem with the filter pipeline
}
} else {
_src = src + offset;
}
assert(context->clevel > 0);
/* Calculate acceleration for different compressors */
accel = get_accel(context);
/* The number of compressed data streams for this block */
if (!dont_split && !leftoverblock && !dict_training) {
nstreams = (int32_t)typesize;
}
else {
nstreams = 1;
}
neblock = bsize / nstreams;
for (j = 0; j < nstreams; j++) {
if (!dict_training) {
dest += sizeof(int32_t);
ntbytes += sizeof(int32_t);
ctbytes += sizeof(int32_t);
}
// See if we have a run here
const uint8_t* ip = (uint8_t*)_src + j * neblock;
const uint8_t* ipbound = (uint8_t*)_src + (j + 1) * neblock;
if (get_run(ip, ipbound)) {
// A run. Encode the repeated byte as a negative length in the length of the split.
int32_t value = _src[j * neblock];
_sw32(dest - 4, -value);
continue;
}
maxout = neblock;
#if defined(HAVE_SNAPPY)
if (context->compcode == BLOSC_SNAPPY) {
maxout = (int32_t)snappy_max_compressed_length((size_t)neblock);
}
#endif /* HAVE_SNAPPY */
if (ntbytes + maxout > maxbytes) {
/* avoid buffer * overrun */
maxout = (int64_t)maxbytes - (int64_t)ntbytes;
if (maxout <= 0) {
return 0; /* non-compressible block */
}
}
if (dict_training) {
// We are in the build dict state, so don't compress
// TODO: copy only a percentage for sampling
memcpy(dest, _src + j * neblock, (unsigned int)neblock);
cbytes = (int32_t)neblock;
}
else if (context->compcode == BLOSC_BLOSCLZ) {
cbytes = blosclz_compress(context->clevel, _src + j * neblock,
(int)neblock, dest, (int)maxout);
}
#if defined(HAVE_LZ4)
else if (context->compcode == BLOSC_LZ4) {
void *hash_table = NULL;
#ifdef HAVE_IPP
hash_table = (void*)thread_context->lz4_hash_table;
#endif
cbytes = lz4_wrap_compress((char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout, accel, hash_table);
}
else if (context->compcode == BLOSC_LZ4HC) {
cbytes = lz4hc_wrap_compress((char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout, context->clevel);
}
#endif /* HAVE_LZ4 */
#if defined(HAVE_LIZARD)
else if (context->compcode == BLOSC_LIZARD) {
cbytes = lizard_wrap_compress((char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout, accel);
}
#endif /* HAVE_LIZARD */
#if defined(HAVE_SNAPPY)
else if (context->compcode == BLOSC_SNAPPY) {
cbytes = snappy_wrap_compress((char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout);
}
#endif /* HAVE_SNAPPY */
#if defined(HAVE_ZLIB)
else if (context->compcode == BLOSC_ZLIB) {
cbytes = zlib_wrap_compress((char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout, context->clevel);
}
#endif /* HAVE_ZLIB */
#if defined(HAVE_ZSTD)
else if (context->compcode == BLOSC_ZSTD) {
cbytes = zstd_wrap_compress(thread_context,
(char*)_src + j * neblock, (size_t)neblock,
(char*)dest, (size_t)maxout, context->clevel);
}
#endif /* HAVE_ZSTD */
else {
blosc_compcode_to_compname(context->compcode, &compname);
fprintf(stderr, "Blosc has not been compiled with '%s' ", compname);
fprintf(stderr, "compression support. Please use one having it.");
return -5; /* signals no compression support */
}
if (cbytes > maxout) {
/* Buffer overrun caused by compression (should never happen) */
return -1;
}
if (cbytes < 0) {
/* cbytes should never be negative */
return -2;
}
if (!dict_training) {
if (cbytes == 0 || cbytes == neblock) {
/* The compressor has been unable to compress data at all. */
/* Before doing the copy, check that we are not running into a
buffer overflow. */
if ((ntbytes + neblock) > maxbytes) {
return 0; /* Non-compressible data */
}
memcpy(dest, _src + j * neblock, (unsigned int)neblock);
cbytes = neblock;
}
_sw32(dest - 4, cbytes);
}
dest += cbytes;
ntbytes += cbytes;
ctbytes += cbytes;
} /* Closes j < nstreams */
//printf("c%d", ctbytes);
return ctbytes;
} | 969 | True | 1 |
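
The compression loop above guards every write into `dest` by recomputing the remaining room (`maxbytes - ntbytes`) and bailing out with 0 (non-compressible block) when a stream would not fit. A minimal standalone sketch of that guard pattern, with hypothetical names and no Blosc APIs:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Append one stream to the output only if it still fits.
 * Returns the updated byte count, or -1 when the copy would
 * overrun the destination (treat as a non-compressible block). */
static int32_t append_stream(uint8_t *dest, int32_t dest_size, int32_t nbytes,
                             const uint8_t *src, int32_t stream_len)
{
    if (stream_len < 0 || nbytes < 0 || nbytes > dest_size) {
        return -1;                          /* inconsistent bookkeeping */
    }
    if (stream_len > dest_size - nbytes) {
        return -1;                          /* would overrun the output buffer */
    }
    memcpy(dest + nbytes, src, (size_t)stream_len);
    return nbytes + stream_len;
}

int main(void)
{
    uint8_t out[16];
    const uint8_t chunk[8] = {0};
    int32_t n = append_stream(out, sizeof(out), 0, chunk, sizeof(chunk));
    n = append_stream(out, sizeof(out), n, chunk, sizeof(chunk));
    printf("%d\n", append_stream(out, sizeof(out), n, chunk, sizeof(chunk))); /* -1: full */
    return 0;
}
```
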
CVE-2020-35518 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | NONE | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Vendor Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/issues/4480', 'name': 'https://github.com/389ds/389-ds-base/issues/4480', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'name': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'name': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-203'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.4.4.0', 'versionEndExcluding': '1.4.4.13', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.4.3.19', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:redhat:directory_server:11.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'When binding against a DN during authentication, the reply from 389-ds-base will be different whether the DN exists or not. This can be used by an unauthenticated attacker to check the existence of an entry in the LDAP database.'}] | 2022-08-05T17:42Z | 2021-03-26T17:15Z | Observable Discrepancy | The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not. | Discrepancies can take many forms, and variations may be detectable in timing, control flow, communications such as replies or requests, or general behavior. These discrepancies can reveal information about the product's operation or internal state to an unauthorized actor. In some cases, discrepancies can be used by attackers to form a side channel.
| https://cwe.mitre.org/data/definitions/203.html | 0 | tbordaz | 2020-12-16 16:30:28+01:00 | Issue 4480 - Unexpected info returned to ldap request (#4491)
Bug description:
If the bind entry does not exist, the bind result info
reports 'No such entry'. It should not give any
information about whether the target entry exists or not
Fix description:
Does not return any additional information during a bind
relates: https://github.com/389ds/389-ds-base/issues/4480
Reviewed by: William Brown, Viktor Ashirov, Mark Reynolds (thank you all)
Platforms tested: F31 | cc0f69283abc082488824702dae485b8eae938bc | False | 389ds/389-ds-base | The enterprise-class Open Source LDAP server for Linux | 2020-09-12 12:54:14 | 2022-08-26 16:45:16 | https://www.port389.org/ | 389ds | 87.0 | 54.0 | ldbm_config_search_entry_callback | ldbm_config_search_entry_callback( Slapi_PBlock * pb __attribute__((unused)) , Slapi_Entry * e , Slapi_Entry * entryAfter __attribute__((unused)) , int * returncode , char * returntext , void * arg) | ['__attribute__', 'e', '__attribute__', 'returncode', 'returntext', 'arg'] | ldbm_config_search_entry_callback(Slapi_PBlock *pb __attribute__((unused)),
Slapi_Entry *e,
Slapi_Entry *entryAfter __attribute__((unused)),
int *returncode,
char *returntext,
void *arg)
{
char buf[BUFSIZ];
struct berval *vals[2];
struct berval val;
struct ldbminfo *li = (struct ldbminfo *)arg;
config_info *config;
int scope;
vals[0] = &val;
vals[1] = NULL;
returntext[0] = '\0';
PR_Lock(li->li_config_mutex);
if (pb) {
slapi_pblock_get(pb, SLAPI_SEARCH_SCOPE, &scope);
if (scope == LDAP_SCOPE_BASE) {
char **attrs = NULL;
slapi_pblock_get(pb, SLAPI_SEARCH_ATTRS, &attrs);
if (attrs) {
for (size_t i = 0; attrs[i]; i++) {
if (ldbm_config_moved_attr(attrs[i])) {
slapi_pblock_set(pb, SLAPI_PB_RESULT_TEXT, "at least one required attribute has been moved to the BDB specific configuration entry");
break;
}
}
}
}
}
for (config = ldbm_config; config->config_name != NULL; config++) {
/* Go through the ldbm_config table and fill in the entry. */
if (!(config->config_flags & (CONFIG_FLAG_ALWAYS_SHOW | CONFIG_FLAG_PREVIOUSLY_SET))) {
/* This config option shouldn't be shown */
continue;
}
ldbm_config_get((void *)li, config, buf);
val.bv_val = buf;
val.bv_len = strlen(buf);
slapi_entry_attr_replace(e, config->config_name, vals);
}
PR_Unlock(li->li_config_mutex);
*returncode = LDAP_SUCCESS;
return SLAPI_DSE_CALLBACK_OK;
} | 282 | True | 1 |
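
The callback above walks a table of config descriptors and only materializes attributes whose flags mark them as visible (ALWAYS_SHOW or PREVIOUSLY_SET). A standalone sketch of that table-driven filter with made-up option names, not the real ldbm_config table:

```c
#include <stdio.h>

#define FLAG_ALWAYS_SHOW     0x1
#define FLAG_PREVIOUSLY_SET  0x2

struct config_info {
    const char *name;
    const char *value;
    unsigned flags;
};

/* Emit only the options whose flags say they should be shown,
 * mirroring the visibility filter in the callback above. */
static void dump_visible(const struct config_info *table, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (!(table[i].flags & (FLAG_ALWAYS_SHOW | FLAG_PREVIOUSLY_SET))) {
            continue;   /* hidden option: skip */
        }
        printf("%s: %s\n", table[i].name, table[i].value);
    }
}

int main(void)
{
    const struct config_info table[] = {
        { "cache-size",    "25000", FLAG_ALWAYS_SHOW },
        { "internal-flag", "1",     0 },                  /* not shown */
        { "db-mode",       "600",   FLAG_PREVIOUSLY_SET },
    };
    dump_visible(table, sizeof(table) / sizeof(table[0]));
    return 0;
}
```
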
CVE-2020-35518 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | NONE | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Vendor Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/issues/4480', 'name': 'https://github.com/389ds/389-ds-base/issues/4480', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'name': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'name': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-203'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.4.4.0', 'versionEndExcluding': '1.4.4.13', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.4.3.19', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:redhat:directory_server:11.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'When binding against a DN during authentication, the reply from 389-ds-base will be different whether the DN exists or not. This can be used by an unauthenticated attacker to check the existence of an entry in the LDAP database.'}] | 2022-08-05T17:42Z | 2021-03-26T17:15Z | Observable Discrepancy | The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not. | Discrepancies can take many forms, and variations may be detectable in timing, control flow, communications such as replies or requests, or general behavior. These discrepancies can reveal information about the product's operation or internal state to an unauthorized actor. In some cases, discrepancies can be used by attackers to form a side channel.
| https://cwe.mitre.org/data/definitions/203.html | 0 | tbordaz | 2020-12-16 16:30:28+01:00 | Issue 4480 - Unexpected info returned to ldap request (#4491)
Bug description:
If the bind entry does not exist, the bind result info
reports 'No such entry'. It should not give any
information about whether the target entry exists or not
Fix description:
Does not return any additional information during a bind
relates: https://github.com/389ds/389-ds-base/issues/4480
Reviewed by: William Brown, Viktor Ashirov, Mark Reynolds (thank you all)
Platforms tested: F31 | cc0f69283abc082488824702dae485b8eae938bc | False | 389ds/389-ds-base | The enterprise-class Open Source LDAP server for Linux | 2020-09-12 12:54:14 | 2022-08-26 16:45:16 | https://www.port389.org/ | 389ds | 87.0 | 54.0 | send_ldap_result_ext | send_ldap_result_ext( Slapi_PBlock * pb , int err , char * matched , char * text , int nentries , struct berval ** urls , BerElement * ber) | ['pb', 'err', 'matched', 'text', 'nentries', 'urls', 'ber'] | send_ldap_result_ext(
Slapi_PBlock *pb,
int err,
char *matched,
char *text,
int nentries,
struct berval **urls,
BerElement *ber)
{
Slapi_Operation *operation;
passwdPolicy *pwpolicy = NULL;
Connection *conn = NULL;
Slapi_DN *sdn = NULL;
const char *dn = NULL;
ber_tag_t tag;
int flush_ber_element = 1;
ber_tag_t bind_method = 0;
int internal_op;
int i, rc, logit = 0;
char *pbtext;
slapi_pblock_get(pb, SLAPI_BIND_METHOD, &bind_method);
slapi_pblock_get(pb, SLAPI_OPERATION, &operation);
slapi_pblock_get(pb, SLAPI_CONNECTION, &conn);
if (text) {
pbtext = text;
} else {
slapi_pblock_get(pb, SLAPI_PB_RESULT_TEXT, &pbtext);
}
if (operation == NULL) {
slapi_log_err(SLAPI_LOG_ERR, "send_ldap_result_ext", "No operation found: slapi_search_internal_set_pb was incomplete (invalid 'base' ?)\n");
return;
}
if (operation->o_status == SLAPI_OP_STATUS_RESULT_SENT) {
return; /* result already sent */
}
if (ber != NULL) {
flush_ber_element = 0;
}
if (err != LDAP_SUCCESS) {
/* count the error for snmp */
/* first check for security errors */
if (err == LDAP_INVALID_CREDENTIALS || err == LDAP_INAPPROPRIATE_AUTH || err == LDAP_AUTH_METHOD_NOT_SUPPORTED || err == LDAP_STRONG_AUTH_NOT_SUPPORTED || err == LDAP_STRONG_AUTH_REQUIRED || err == LDAP_CONFIDENTIALITY_REQUIRED || err == LDAP_INSUFFICIENT_ACCESS || err == LDAP_AUTH_UNKNOWN) {
slapi_counter_increment(g_get_global_snmp_vars()->ops_tbl.dsSecurityErrors);
} else if (err != LDAP_REFERRAL && err != LDAP_OPT_REFERRALS && err != LDAP_PARTIAL_RESULTS) {
/*madman man spec says not to count as normal errors
--security errors
--referrals
-- partially serviced operations will not be counted as an error
*/
slapi_counter_increment(g_get_global_snmp_vars()->ops_tbl.dsErrors);
}
}
slapi_log_err(SLAPI_LOG_TRACE, "send_ldap_result_ext", "=> %d:%s:%s\n", err,
matched ? matched : "", text ? text : "");
switch (operation->o_tag) {
case LBER_DEFAULT:
tag = LBER_SEQUENCE;
break;
case LDAP_REQ_SEARCH:
tag = LDAP_RES_SEARCH_RESULT;
break;
case LDAP_REQ_DELETE:
tag = LDAP_RES_DELETE;
break;
case LDAP_REFERRAL:
if (conn && conn->c_ldapversion > LDAP_VERSION2) {
tag = LDAP_TAG_REFERRAL;
break;
}
/* FALLTHROUGH */
default:
tag = operation->o_tag + 1;
break;
}
internal_op = operation_is_flag_set(operation, OP_FLAG_INTERNAL);
if ((conn == NULL) || (internal_op)) {
if (operation->o_result_handler != NULL) {
operation->o_result_handler(conn, operation, err,
matched, text, nentries, urls);
logit = 1;
}
goto log_and_return;
}
/* invalid password. Update the password retry here */
/* put this here for now. It could be a send_result pre-op plugin. */
if ((err == LDAP_INVALID_CREDENTIALS) && (bind_method != LDAP_AUTH_SASL)) {
slapi_pblock_get(pb, SLAPI_TARGET_SDN, &sdn);
dn = slapi_sdn_get_dn(sdn);
pwpolicy = new_passwdPolicy(pb, dn);
if (pwpolicy && (pwpolicy->pw_lockout == 1)) {
if (update_pw_retry(pb) == LDAP_CONSTRAINT_VIOLATION && !pwpolicy->pw_is_legacy) {
/*
* If we are not using the legacy pw policy behavior,
* convert the error 49 to 19 (constraint violation)
* and log a message
*/
err = LDAP_CONSTRAINT_VIOLATION;
text = "Invalid credentials, you now have exceeded the password retry limit.";
}
}
}
if (ber == NULL) {
if ((ber = der_alloc()) == NULL) {
slapi_log_err(SLAPI_LOG_ERR, "send_ldap_result_ext", "ber_alloc failed\n");
goto log_and_return;
}
}
/* there is no admin limit exceeded in v2 - change to size limit XXX */
if (err == LDAP_ADMINLIMIT_EXCEEDED &&
conn->c_ldapversion < LDAP_VERSION3) {
err = LDAP_SIZELIMIT_EXCEEDED;
}
if (conn->c_ldapversion < LDAP_VERSION3 || urls == NULL) {
char *save, *buf = NULL;
/*
* if there are v2 referrals to send, construct
* the v2 referral string.
*/
if (urls != NULL) {
int len;
/* count the referral */
slapi_counter_increment(g_get_global_snmp_vars()->ops_tbl.dsReferrals);
/*
* figure out how much space we need
*/
len = 10; /* strlen("Referral:") + NULL */
for (i = 0; urls[i] != NULL; i++) {
len += urls[i]->bv_len + 1; /* newline + ref */
}
if (text != NULL) {
len += strlen(text) + 1; /* text + newline */
}
/*
* allocate buffer and fill it in with the error
* message plus v2-style referrals.
*/
buf = slapi_ch_malloc(len);
*buf = '\0';
if (text != NULL) {
strcpy(buf, text);
strcat(buf, "\n");
}
strcat(buf, "Referral:");
for (i = 0; urls[i] != NULL; i++) {
strcat(buf, "\n");
strcat(buf, urls[i]->bv_val);
}
save = text;
text = buf;
}
if ((conn->c_ldapversion < LDAP_VERSION3 &&
err == LDAP_REFERRAL) ||
urls != NULL) {
err = LDAP_PARTIAL_RESULTS;
}
rc = ber_printf(ber, "{it{ess", operation->o_msgid, tag, err,
matched ? matched : "", pbtext ? pbtext : "");
/*
* if this is an LDAPv3 ExtendedResponse to an ExtendedRequest,
* check to see if the optional responseName and response OCTET
* STRING need to be appended.
*/
if (rc != LBER_ERROR) {
rc = check_and_send_extended_result(pb, tag, ber);
}
/*
* if this is an LDAPv3 BindResponse, check to see if the
* optional serverSaslCreds OCTET STRING is present and needs
* to be appended.
*/
if (rc != LBER_ERROR) {
rc = check_and_send_SASL_response(pb, tag, ber, conn);
/* XXXmcs: should we also check for a missing auth response control? */
}
if (rc != LBER_ERROR) {
rc = ber_printf(ber, "}"); /* one more } to come */
}
if (buf != NULL) {
text = save;
slapi_ch_free((void **)&buf);
}
} else {
/*
* there are v3 referrals to add to the result
*/
/* count the referral */
if (!config_check_referral_mode())
slapi_counter_increment(g_get_global_snmp_vars()->ops_tbl.dsReferrals);
rc = ber_printf(ber, "{it{esst{s", operation->o_msgid, tag, err,
matched ? matched : "", text ? text : "", LDAP_TAG_REFERRAL,
urls[0]->bv_val);
for (i = 1; urls[i] != NULL && rc != LBER_ERROR; i++) {
rc = ber_printf(ber, "s", urls[i]->bv_val);
}
if (rc != LBER_ERROR) {
rc = ber_printf(ber, "}"); /* two more } to come */
}
/*
* if this is an LDAPv3 ExtendedResponse to an ExtendedRequest,
* check to see if the optional responseName and response OCTET
* STRING need to be appended.
*/
if (rc != LBER_ERROR) {
rc = check_and_send_extended_result(pb, tag, ber);
}
/*
* if this is an LDAPv3 BindResponse, check to see if the
* optional serverSaslCreds OCTET STRING is present and needs
* to be appended.
*/
if (rc != LBER_ERROR) {
rc = check_and_send_SASL_response(pb, tag, ber, conn);
}
if (rc != LBER_ERROR) {
rc = ber_printf(ber, "}"); /* one more } to come */
}
}
if (err == LDAP_SUCCESS) {
/*
* Process the Read Entry Controls (if any)
*/
if (process_read_entry_controls(pb, LDAP_CONTROL_PRE_READ_ENTRY)) {
err = LDAP_UNAVAILABLE_CRITICAL_EXTENSION;
goto log_and_return;
}
if (process_read_entry_controls(pb, LDAP_CONTROL_POST_READ_ENTRY)) {
err = LDAP_UNAVAILABLE_CRITICAL_EXTENSION;
goto log_and_return;
}
}
if (operation->o_results.result_controls != NULL && conn->c_ldapversion >= LDAP_VERSION3 && write_controls(ber, operation->o_results.result_controls) != 0) {
rc = (int)LBER_ERROR;
}
if (rc != LBER_ERROR) { /* end the LDAPMessage sequence */
rc = ber_put_seq(ber);
}
if (rc == LBER_ERROR) {
slapi_log_err(SLAPI_LOG_ERR, "send_ldap_result_ext", "ber_printf failed 1\n");
if (flush_ber_element == 1) {
/* we alloced the ber */
ber_free(ber, 1 /* freebuf */);
}
goto log_and_return;
}
if (flush_ber_element) {
/* write only one pdu at a time - wait til it's our turn */
if (flush_ber(pb, conn, operation, ber, _LDAP_SEND_RESULT) == 0) {
logit = 1;
}
}
log_and_return:
operation->o_status = SLAPI_OP_STATUS_RESULT_SENT; /* in case this has not yet been set */
if (logit && (operation_is_flag_set(operation, OP_FLAG_ACTION_LOG_ACCESS) ||
(internal_op && config_get_plugin_logging()))) {
log_result(pb, operation, err, tag, nentries);
}
slapi_log_err(SLAPI_LOG_TRACE, "send_ldap_result_ext", "<= %d\n", err);
} | 1250 | True | 1 |
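
For LDAPv2 clients, the function above first walks the referral URLs to compute the exact buffer size and only then allocates and concatenates. The same measure-then-build pattern in isolation — a hypothetical helper, not part of the slapi API:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Join an optional message and a NULL-terminated list of referral URLs into
 * one newly allocated string, sizing the buffer before any concatenation. */
static char *build_referral_text(const char *text, const char *const *urls)
{
    size_t len = strlen("Referral:") + 1;              /* label + NUL */
    if (text) {
        len += strlen(text) + 1;                       /* text + newline */
    }
    for (size_t i = 0; urls[i] != NULL; i++) {
        len += strlen(urls[i]) + 1;                    /* newline + url */
    }
    char *buf = malloc(len);
    if (!buf) {
        return NULL;
    }
    buf[0] = '\0';
    if (text) {
        strcat(buf, text);
        strcat(buf, "\n");
    }
    strcat(buf, "Referral:");
    for (size_t i = 0; urls[i] != NULL; i++) {
        strcat(buf, "\n");
        strcat(buf, urls[i]);
    }
    return buf;
}

int main(void)
{
    const char *const urls[] = { "ldap://a.example.com", "ldap://b.example.com", NULL };
    char *s = build_referral_text("partial results", urls);
    puts(s ? s : "(alloc failed)");
    free(s);
    return 0;
}
```
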
CVE-2020-35518 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | NONE | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Vendor Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/issues/4480', 'name': 'https://github.com/389ds/389-ds-base/issues/4480', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'name': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'name': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-203'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.4.4.0', 'versionEndExcluding': '1.4.4.13', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.4.3.19', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:redhat:directory_server:11.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'When binding against a DN during authentication, the reply from 389-ds-base will be different whether the DN exists or not. This can be used by an unauthenticated attacker to check the existence of an entry in the LDAP database.'}] | 2022-08-05T17:42Z | 2021-03-26T17:15Z | Observable Discrepancy | The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not. | Discrepancies can take many forms, and variations may be detectable in timing, control flow, communications such as replies or requests, or general behavior. These discrepancies can reveal information about the product's operation or internal state to an unauthorized actor. In some cases, discrepancies can be used by attackers to form a side channel.
| https://cwe.mitre.org/data/definitions/203.html | 0 | Mark Reynolds | 2021-02-09 14:02:59-05:00 | Issue 4609 - CVE - info disclosure when authenticating
Description: If you bind as a user that does not exist, error 49 is returned
instead of error 32, as error 32 discloses that the entry does
not exist. When you bind as an entry that does not have userpassword
set, error 48 (inappropriate auth) is returned, but this
discloses that the entry does indeed exist. Instead we should
always return error 49, even if the password is not set in the
entry. This way we do not disclose to an attacker whether the Bind
DN exists or not.
Relates: https://github.com/389ds/389-ds-base/issues/4609
Reviewed by: tbordaz(Thanks!) | b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32 | False | 389ds/389-ds-base | The enterprise-class Open Source LDAP server for Linux | 2020-09-12 12:54:14 | 2022-08-26 16:45:16 | https://www.port389.org/ | 389ds | 87.0 | 54.0 | ldbm_back_bind | ldbm_back_bind( Slapi_PBlock * pb) | ['pb'] | ldbm_back_bind(Slapi_PBlock *pb)
{
backend *be;
ldbm_instance *inst;
ber_tag_t method;
struct berval *cred;
struct ldbminfo *li;
struct backentry *e;
Slapi_Attr *attr;
Slapi_Value **bvals;
entry_address *addr;
back_txn txn = {NULL};
int rc = SLAPI_BIND_SUCCESS;
int result_sent = 0;
/* get parameters */
slapi_pblock_get(pb, SLAPI_BACKEND, &be);
slapi_pblock_get(pb, SLAPI_PLUGIN_PRIVATE, &li);
slapi_pblock_get(pb, SLAPI_TARGET_ADDRESS, &addr);
slapi_pblock_get(pb, SLAPI_BIND_METHOD, &method);
slapi_pblock_get(pb, SLAPI_BIND_CREDENTIALS, &cred);
slapi_pblock_get(pb, SLAPI_TXN, &txn.back_txn_txn);
if (!txn.back_txn_txn) {
dblayer_txn_init(li, &txn);
slapi_pblock_set(pb, SLAPI_TXN, txn.back_txn_txn);
}
inst = (ldbm_instance *)be->be_instance_info;
if (inst->inst_ref_count) {
slapi_counter_increment(inst->inst_ref_count);
} else {
slapi_log_err(SLAPI_LOG_ERR, "ldbm_back_bind",
"instance %s does not exist.\n", inst->inst_name);
return (SLAPI_BIND_FAIL);
}
/* always allow noauth simple binds (front end will send the result) */
if (method == LDAP_AUTH_SIMPLE && cred->bv_len == 0) {
rc = SLAPI_BIND_ANONYMOUS;
goto bail;
}
/*
* find the target entry. find_entry() takes care of referrals
* and sending errors if the entry does not exist.
*/
if ((e = find_entry(pb, be, addr, &txn, &result_sent)) == NULL) {
rc = SLAPI_BIND_FAIL;
/* In the failure case, the result is supposed to be sent in the backend. */
if (!result_sent) {
slapi_send_ldap_result(pb, LDAP_INAPPROPRIATE_AUTH, NULL, NULL, 0, NULL);
}
goto bail;
}
switch (method) {
case LDAP_AUTH_SIMPLE: {
Slapi_Value cv;
if (slapi_entry_attr_find(e->ep_entry, "userpassword", &attr) != 0) {
slapi_send_ldap_result(pb, LDAP_INAPPROPRIATE_AUTH, NULL,
NULL, 0, NULL);
CACHE_RETURN(&inst->inst_cache, &e);
rc = SLAPI_BIND_FAIL;
goto bail;
}
bvals = attr_get_present_values(attr);
slapi_value_init_berval(&cv, cred);
if (slapi_pw_find_sv(bvals, &cv) != 0) {
slapi_pblock_set(pb, SLAPI_PB_RESULT_TEXT, "Invalid credentials");
slapi_send_ldap_result(pb, LDAP_INVALID_CREDENTIALS, NULL, NULL, 0, NULL);
CACHE_RETURN(&inst->inst_cache, &e);
value_done(&cv);
rc = SLAPI_BIND_FAIL;
goto bail;
}
value_done(&cv);
} break;
default:
slapi_send_ldap_result(pb, LDAP_STRONG_AUTH_NOT_SUPPORTED, NULL,
"auth method not supported", 0, NULL);
CACHE_RETURN(&inst->inst_cache, &e);
rc = SLAPI_BIND_FAIL;
goto bail;
}
CACHE_RETURN(&inst->inst_cache, &e);
bail:
if (inst->inst_ref_count) {
slapi_counter_decrement(inst->inst_ref_count);
}
/* success: front end will send result */
return rc;
} | 490 | True | 1 |
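
The commit description in this record is essentially a three-way collapse of observable outcomes: a missing entry, a missing userpassword, and a wrong password must all look the same to the client. A compact sketch of that control flow using the result codes named in the description (stand-in types only; the real code path goes through slapi_send_ldap_result):

```c
#include <stdio.h>

/* Stand-in LDAP result codes (values as in RFC 4511). */
enum { LDAP_SUCCESS = 0, LDAP_INVALID_CREDENTIALS = 49 };

struct entry { int exists; int has_userpassword; int password_ok; };

/* Collapse "no such entry", "no userpassword attribute" and "wrong password"
 * into the single result 49 so none of them can be told apart. */
static int bind_result(const struct entry *e)
{
    if (!e->exists) {
        return LDAP_INVALID_CREDENTIALS;      /* was error 32 before the fix */
    }
    if (!e->has_userpassword) {
        return LDAP_INVALID_CREDENTIALS;      /* was error 48 before the fix */
    }
    if (!e->password_ok) {
        return LDAP_INVALID_CREDENTIALS;
    }
    return LDAP_SUCCESS;
}

int main(void)
{
    struct entry missing = { 0, 0, 0 };
    struct entry no_pw   = { 1, 0, 0 };
    struct entry wrong   = { 1, 1, 0 };
    printf("%d %d %d\n", bind_result(&missing), bind_result(&no_pw), bind_result(&wrong));
    return 0;
}
```

Returning one result code for all failure modes trades a friendlier error message for not leaking which DNs exist, which is the trade-off the commit message describes.
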
CVE-2020-35518 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:N/A:N | NETWORK | LOW | NONE | PARTIAL | NONE | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | LOW | NONE | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'name': 'https://bugzilla.redhat.com/show_bug.cgi?id=1905565', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Vendor Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/issues/4480', 'name': 'https://github.com/389ds/389-ds-base/issues/4480', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'name': 'https://github.com/389ds/389-ds-base/commit/cc0f69283abc082488824702dae485b8eae938bc', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'name': 'https://github.com/389ds/389-ds-base/commit/b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-203'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '2.0.0', 'versionEndExcluding': '2.0.3', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.4.4.0', 'versionEndExcluding': '1.4.4.13', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:389_directory_server:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.4.3.19', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:7.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:redhat:enterprise_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:redhat:directory_server:11.0:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'When binding against a DN during authentication, the reply from 389-ds-base will be different whether the DN exists or not. This can be used by an unauthenticated attacker to check the existence of an entry in the LDAP database.'}] | 2022-08-05T17:42Z | 2021-03-26T17:15Z | Observable Discrepancy | The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not. | Discrepancies can take many forms, and variations may be detectable in timing, control flow, communications such as replies or requests, or general behavior. These discrepancies can reveal information about the product's operation or internal state to an unauthorized actor. In some cases, discrepancies can be used by attackers to form a side channel.
| https://cwe.mitre.org/data/definitions/203.html | 0 | Mark Reynolds | 2021-02-09 14:02:59-05:00 | Issue 4609 - CVE - info disclosure when authenticating
Description: If you bind as a user that does not exist, error 49 is returned
instead of error 32, as error 32 discloses that the entry does
not exist. When you bind as an entry that does not have userpassword
set, error 48 (inappropriate auth) is returned, but this
discloses that the entry does indeed exist. Instead we should
always return error 49, even if the password is not set in the
entry. This way we do not disclose to an attacker whether the Bind
DN exists or not.
Relates: https://github.com/389ds/389-ds-base/issues/4609
Reviewed by: tbordaz(Thanks!) | b6aae4d8e7c8a6ddd21646f94fef1bf7f22c3f32 | False | 389ds/389-ds-base | The enterprise-class Open Source LDAP server for Linux | 2020-09-12 12:54:14 | 2022-08-26 16:45:16 | https://www.port389.org/ | 389ds | 87.0 | 54.0 | dse_bind | dse_bind( Slapi_PBlock * pb) | ['pb'] | dse_bind(Slapi_PBlock *pb) /* JCM There should only be one exit point from this function! */
{
ber_tag_t method; /* The bind method */
struct berval *cred; /* The bind credentials */
Slapi_Value **bvals;
struct dse *pdse;
Slapi_Attr *attr;
Slapi_DN *sdn = NULL;
Slapi_Entry *ec = NULL;
/*Get the parameters*/
if (slapi_pblock_get(pb, SLAPI_PLUGIN_PRIVATE, &pdse) < 0 ||
slapi_pblock_get(pb, SLAPI_BIND_TARGET_SDN, &sdn) < 0 ||
slapi_pblock_get(pb, SLAPI_BIND_METHOD, &method) < 0 ||
slapi_pblock_get(pb, SLAPI_BIND_CREDENTIALS, &cred) < 0) {
slapi_send_ldap_result(pb, LDAP_OPERATIONS_ERROR, NULL, NULL, 0, NULL);
return SLAPI_BIND_FAIL;
}
/* always allow noauth simple binds */
if (method == LDAP_AUTH_SIMPLE && cred->bv_len == 0) {
/*
* report success to client, but return
* SLAPI_BIND_FAIL so we don't
* authorize based on noauth credentials
*/
slapi_send_ldap_result(pb, LDAP_SUCCESS, NULL, NULL, 0, NULL);
return (SLAPI_BIND_FAIL);
}
ec = dse_get_entry_copy(pdse, sdn, DSE_USE_LOCK);
if (ec == NULL) {
slapi_send_ldap_result(pb, LDAP_NO_SUCH_OBJECT, NULL, NULL, 0, NULL);
return (SLAPI_BIND_FAIL);
}
switch (method) {
case LDAP_AUTH_SIMPLE: {
Slapi_Value cv;
if (slapi_entry_attr_find(ec, "userpassword", &attr) != 0) {
slapi_send_ldap_result(pb, LDAP_INAPPROPRIATE_AUTH, NULL, NULL, 0, NULL);
slapi_entry_free(ec);
return SLAPI_BIND_FAIL;
}
bvals = attr_get_present_values(attr);
slapi_value_init_berval(&cv, cred);
if (slapi_pw_find_sv(bvals, &cv) != 0) {
slapi_send_ldap_result(pb, LDAP_INVALID_CREDENTIALS, NULL, NULL, 0, NULL);
slapi_entry_free(ec);
value_done(&cv);
return SLAPI_BIND_FAIL;
}
value_done(&cv);
} break;
default:
slapi_send_ldap_result(pb, LDAP_STRONG_AUTH_NOT_SUPPORTED, NULL, "auth method not supported", 0, NULL);
slapi_entry_free(ec);
return SLAPI_BIND_FAIL;
}
slapi_entry_free(ec);
/* success: front end will send result */
return SLAPI_BIND_SUCCESS;
} | 336 | True | 1 |
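
The CWE-203 text repeated in these records notes that discrepancies can also surface as timing differences, not just different reply codes. A minimal constant-time comparison in C, illustrating that general point rather than anything in 389-ds-base (which delegates password checks to slapi_pw_find_sv):

```c
#include <stddef.h>
#include <stdio.h>

/* Compare two equal-length byte strings without an early exit, so the time
 * taken does not depend on where the first mismatch occurs. */
static int ct_equal(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= (unsigned char)(a[i] ^ b[i]);
    }
    return diff == 0;
}

int main(void)
{
    const unsigned char stored[]  = "correct horse";
    const unsigned char attempt[] = "correct h0rse";
    printf("%d\n", ct_equal(stored, attempt, sizeof(stored) - 1)); /* 0 */
    return 0;
}
```
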
CVE-2021-35525 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | NONE | LOW | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/roehling/postsrsd/commit/077be98d8c8a9847e4ae0c7dc09e7474cbe27db2', 'name': 'https://github.com/roehling/postsrsd/commit/077be98d8c8a9847e4ae0c7dc09e7474cbe27db2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.gentoo.org/793674', 'name': 'https://bugs.gentoo.org/793674', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Patch', 'Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/roehling/postsrsd/releases/tag/1.11', 'name': 'https://github.com/roehling/postsrsd/releases/tag/1.11', 'refsource': 'MISC', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202107-08', 'name': 'GLSA-202107-08', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-noinfo'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:postsrsd_project:postsrsd:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.11', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'PostSRSd before 1.11 allows a denial of service (subprocess hang) if Postfix sends certain long data fields such as multiple concatenated email addresses. NOTE: the PostSRSd maintainer acknowledges "theoretically, this error should never occur ... I\'m not sure if there\'s a reliable way to trigger this condition by an external attacker, but it is a security bug in PostSRSd nevertheless."'}] | 2021-09-20T18:52Z | 2021-06-28T18:15Z | Insufficient Information | There is insufficient information about the issue to classify it; details are unkown or unspecified. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Timo Röhling | 2021-03-21 15:27:55+01:00 | SECURITY: Fix DoS on overly long input from Postfix
Thanks to Mateusz Jończyk who reported this issue and gave valuable
feedback for its resolution.
PostSRSd would hang on an overly long GET request, because the
fread()/fwrite() logic in the subprocess would get confused by the
remaining input line in its buffer.
Theoretically, this error should never occur, as Postfix is supposed to
send valid email addresses only, which are shorter than the buffer, even
assuming every single character is percent-encoded. However, Postfix
sometimes does seem to send malformed requests with multiple concatenated
email addresses. I'm not sure if there's a reliable way to trigger this
condition by an external attacker, but it is a security bug in PostSRSd
nevertheless. | 077be98d8c8a9847e4ae0c7dc09e7474cbe27db2 | False | roehling/postsrsd | Postfix Sender Rewriting Scheme daemon | 2012-12-07 14:52:01 | 2022-08-27 23:31:07 | roehling | 272.0 | 30.0 | main | main( int argc , char ** argv) | ['argc', 'argv'] | int main(int argc, char **argv)
{
int opt, timeout = 1800, family = AF_UNSPEC, hashlength = 0, hashmin = 0;
int daemonize = FALSE, always_rewrite = FALSE;
char *listen_addr = NULL, *forward_service = NULL, *reverse_service = NULL,
*user = NULL, *domain = NULL, *chroot_dir = NULL;
char separator = '=';
char *secret_file = NULL, *pid_file = NULL;
FILE *pf = NULL, *sf = NULL;
struct passwd *pwd = NULL;
char secretbuf[1024], *secret = NULL;
char *tmp;
time_t now;
srs_t *srs;
const char **excludes;
size_t s1 = 0, s2 = 1;
struct pollfd fds[4];
size_t socket_count = 0, sc;
int sockets[4] = {-1, -1, -1, -1};
handle_t handler[4] = {0, 0, 0, 0};
int fd, maxfd;
excludes = (const char **)calloc(1, sizeof(char *));
tmp = strrchr(argv[0], '/');
if (tmp)
self = strdup(tmp + 1);
else
self = strdup(argv[0]);
while ((opt = getopt(argc, argv, "46d:a:l:f:r:s:n:N:u:t:p:c:X::ADhev"))
!= -1)
{
switch (opt)
{
case '?':
return EXIT_FAILURE;
case '4':
family = AF_INET;
break;
case '6':
family = AF_INET6;
break;
case 'd':
domain = strdup(optarg);
break;
case 'a':
separator = *optarg;
break;
case 'l':
listen_addr = strdup(optarg);
break;
case 'f':
forward_service = strdup(optarg);
break;
case 'r':
reverse_service = strdup(optarg);
break;
case 't':
timeout = atoi(optarg);
break;
case 's':
secret_file = strdup(optarg);
break;
case 'n':
hashlength = atoi(optarg);
break;
case 'N':
hashmin = atoi(optarg);
break;
case 'p':
pid_file = strdup(optarg);
break;
case 'u':
user = strdup(optarg);
break;
case 'c':
chroot_dir = strdup(optarg);
break;
case 'D':
daemonize = TRUE;
break;
case 'A':
always_rewrite = TRUE;
break;
case 'h':
show_help();
return EXIT_SUCCESS;
case 'X':
if (optarg != NULL)
{
tmp = strtok(optarg, ",; \t\r\n");
while (tmp)
{
if (s1 + 1 >= s2)
{
s2 *= 2;
excludes = (const char **)realloc(
excludes, s2 * sizeof(char *));
if (excludes == NULL)
{
fprintf(stderr, "%s: Out of memory\n\n", self);
return EXIT_FAILURE;
}
}
excludes[s1++] = strdup(tmp);
tmp = strtok(NULL, ",; \t\r\n");
}
excludes[s1] = NULL;
}
break;
case 'e':
if (getenv("SRS_DOMAIN") != NULL)
domain = strdup(getenv("SRS_DOMAIN"));
if (getenv("SRS_SEPARATOR") != NULL)
separator = *getenv("SRS_SEPARATOR");
if (getenv("SRS_HASHLENGTH") != NULL)
hashlength = atoi(getenv("SRS_HASHLENGTH"));
if (getenv("SRS_HASHMIN") != NULL)
hashmin = atoi(getenv("SRS_HASHMIN"));
if (getenv("SRS_FORWARD_PORT") != NULL)
forward_service = strdup(getenv("SRS_FORWARD_PORT"));
if (getenv("SRS_REVERSE_PORT") != NULL)
reverse_service = strdup(getenv("SRS_REVERSE_PORT"));
if (getenv("SRS_TIMEOUT") != NULL)
timeout = atoi(getenv("SRS_TIMEOUT"));
if (getenv("SRS_SECRET") != NULL)
secret_file = strdup(getenv("SRS_SECRET"));
if (getenv("SRS_PID_FILE") != NULL)
pid_file = strdup(getenv("SRS_PID_FILE"));
if (getenv("RUN_AS") != NULL)
user = strdup(getenv("RUN_AS"));
if (getenv("CHROOT") != NULL)
chroot_dir = strdup(getenv("CHROOT"));
if (getenv("SRS_EXCLUDE_DOMAINS") != NULL)
{
tmp = strtok(getenv("SRS_EXCLUDE_DOMAINS"), ",; \t\r\n");
while (tmp)
{
if (s1 + 1 >= s2)
{
s2 *= 2;
excludes = (const char **)realloc(
excludes, s2 * sizeof(char *));
if (excludes == NULL)
{
fprintf(stderr, "%s: Out of memory\n\n", self);
return EXIT_FAILURE;
}
}
excludes[s1++] = strdup(tmp);
tmp = strtok(NULL, ",; \t\r\n");
}
excludes[s1] = NULL;
}
break;
case 'v':
fprintf(stdout, "%s\n", POSTSRSD_VERSION);
return EXIT_SUCCESS;
}
}
if (optind < argc)
{
fprintf(stderr, "%s: extra argument on command line: %s\n", self,
argv[optind]);
return EXIT_FAILURE;
}
if (domain == NULL || *domain == 0)
{
fprintf(stderr, "%s: You must set a home domain (-d)\n", self);
return EXIT_FAILURE;
}
if (separator != '=' && separator != '+' && separator != '-')
{
fprintf(stderr, "%s: SRS separator character must be one of '=+-'\n",
self);
return EXIT_FAILURE;
}
if (forward_service == NULL)
forward_service = strdup("10001");
if (reverse_service == NULL)
reverse_service = strdup("10002");
/* Close all file descriptors (std ones will be closed later). */
maxfd = sysconf(_SC_OPEN_MAX);
for (fd = 3; fd < maxfd; fd++)
close(fd);
/* The stuff we do first may not be possible from within chroot or without
* privileges */
/* Open pid file for writing (the actual process ID is filled in later) */
if (pid_file)
{
pf = fopen(pid_file, "w");
if (pf == NULL)
{
fprintf(stderr, "%s: Cannot write PID: %s\n\n", self, pid_file);
return EXIT_FAILURE;
}
}
/* Read secret. The default installation makes this root accessible only. */
if (secret_file != NULL)
{
sf = fopen(secret_file, "rb");
if (sf == NULL)
{
fprintf(stderr, "%s: Cannot open file with secret: %s\n", self,
secret_file);
return EXIT_FAILURE;
}
}
else
{
fprintf(stderr, "%s: You must set a secret (-s)\n", self);
return EXIT_FAILURE;
}
/* Bind ports. May require privileges if the config specifies ports below
* 1024 */
sc = bind_service(listen_addr, forward_service, family,
&sockets[socket_count], 4 - socket_count);
if (sc == 0)
return EXIT_FAILURE;
while (sc-- > 0)
handler[socket_count++] = handle_forward;
free(forward_service);
sc = bind_service(listen_addr, reverse_service, family,
&sockets[socket_count], 4 - socket_count);
if (sc == 0)
return EXIT_FAILURE;
while (sc-- > 0)
handler[socket_count++] = handle_reverse;
free(reverse_service);
/* Open syslog now (NDELAY), because it may no longer be reachable from
* chroot */
openlog(self, LOG_PID | LOG_NDELAY, LOG_MAIL);
/* Force loading of timezone info (suggested by patrickdk77) */
now = time(NULL);
localtime(&now);
/* We also have to lookup the uid of the unprivileged user before the
* chroot. */
if (user)
{
errno = 0;
pwd = getpwnam(user);
if (pwd == NULL)
{
if (errno != 0)
fprintf(stderr, "%s: Failed to lookup user: %s\n", self,
strerror(errno));
else
fprintf(stderr, "%s: No such user: %s\n", self, user);
return EXIT_FAILURE;
}
}
/* Now we can chroot, which again requires root privileges */
if (chroot_dir)
{
if (chdir(chroot_dir) < 0)
{
fprintf(stderr, "%s: Cannot change to chroot: %s\n", self,
strerror(errno));
return EXIT_FAILURE;
}
if (chroot(chroot_dir) < 0)
{
fprintf(stderr, "%s: Failed to enable chroot: %s\n", self,
strerror(errno));
return EXIT_FAILURE;
}
}
/* Finally, we revert to the unprivileged user */
if (pwd)
{
if (setgid(pwd->pw_gid) < 0)
{
fprintf(stderr, "%s: Failed to switch group id: %s\n", self,
strerror(errno));
return EXIT_FAILURE;
}
if (setuid(pwd->pw_uid) < 0)
{
fprintf(stderr, "%s: Failed to switch user id: %s\n", self,
strerror(errno));
return EXIT_FAILURE;
}
}
/* Standard double fork technique to disavow all knowledge about the
* controlling terminal */
if (daemonize)
{
close(0);
close(1);
close(2);
if (fork() != 0)
return EXIT_SUCCESS;
setsid();
if (fork() != 0)
return EXIT_SUCCESS;
}
/* Make note of our actual process ID */
if (pf)
{
fprintf(pf, "%d", (int)getpid());
fclose(pf);
}
srs = srs_new();
while ((secret = fgets(secretbuf, sizeof(secretbuf), sf)))
{
secret = strtok(secret, "\r\n");
if (secret)
srs_add_secret(srs, secret);
}
fclose(sf);
srs_set_alwaysrewrite(srs, always_rewrite);
srs_set_separator(srs, separator);
if (hashlength)
srs_set_hashlength(srs, hashlength);
if (hashmin)
srs_set_hashmin(srs, hashmin);
for (sc = 0; sc < socket_count; ++sc)
{
fds[sc].fd = sockets[sc];
fds[sc].events = POLLIN;
}
while (TRUE)
{
int conn;
FILE *fp;
char linebuf[1024], *line;
char keybuf[1024], *key;
if (poll(fds, socket_count, 1000) < 0)
{
if (errno == EINTR)
continue;
if (daemonize)
syslog(LOG_MAIL | LOG_ERR, "Poll failure: %s", strerror(errno));
else
fprintf(stderr, "%s: Poll failure: %s\n", self,
strerror(errno));
return EXIT_FAILURE;
}
for (sc = 0; sc < socket_count; ++sc)
{
if (fds[sc].revents)
{
conn = accept(fds[sc].fd, NULL, NULL);
if (conn < 0)
continue;
if (fork() == 0)
{
int i;
/* close listen sockets so that we don't stop the main
* daemon process from restarting */
for (i = 0; i < socket_count; ++i)
close(sockets[i]);
fp = fdopen(conn, "r+");
if (fp == NULL)
exit(EXIT_FAILURE);
fds[0].fd = conn;
fds[0].events = POLLIN;
if (poll(fds, 1, timeout * 1000) <= 0)
return EXIT_FAILURE;
line = fgets(linebuf, sizeof(linebuf), fp);
while (line)
{
fseek(fp, 0, SEEK_CUR); /* Workaround for Solaris */
char *token;
token = strtok(line, " \r\n");
if (token == NULL || strcmp(token, "get") != 0)
{
fprintf(fp, "500 Invalid request\n");
fflush(fp);
return EXIT_FAILURE;
}
token = strtok(NULL, "\r\n");
if (!token)
{
fprintf(fp, "500 Invalid request\n");
fflush(fp);
return EXIT_FAILURE;
}
key = url_decode(keybuf, sizeof(keybuf), token);
if (!key)
{
fprintf(fp, "500 Invalid request\n");
fflush(fp);
return EXIT_FAILURE;
}
handler[sc](srs, fp, key, domain, excludes);
fflush(fp);
if (poll(fds, 1, timeout * 1000) <= 0)
break;
line = fgets(linebuf, sizeof(linebuf), fp);
}
fclose(fp);
return EXIT_SUCCESS;
}
close(conn);
}
}
waitpid(-1, NULL, WNOHANG);
}
return EXIT_SUCCESS;
} | 2093 | True | 1 |
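
The commit message in this record attributes the hang to a request line longer than the 1024-byte buffer: fgets() returns a partial line and the leftover bytes confuse the following reads. One defensive reading strategy is to detect the missing newline, drain the oversized remainder, and reject the request; a sketch under that assumption, not the actual PostSRSd patch:

```c
#include <stdio.h>
#include <string.h>

/* Read one request line of at most bufsz-1 bytes. If the line does not fit,
 * consume the remainder up to the newline and report it as too long, so the
 * next read starts cleanly at the following line.
 * Returns 1 on success, 0 on a too-long line, -1 on EOF. */
static int read_request_line(FILE *fp, char *buf, size_t bufsz)
{
    if (fgets(buf, (int)bufsz, fp) == NULL) {
        return -1;
    }
    if (strchr(buf, '\n') != NULL) {
        return 1;                      /* complete line */
    }
    int c;
    while ((c = fgetc(fp)) != EOF && c != '\n') {
        /* discard the oversized remainder */
    }
    return 0;
}

int main(void)
{
    char line[64];
    int rc;
    while ((rc = read_request_line(stdin, line, sizeof(line))) != -1) {
        if (rc == 0) {
            fputs("500 Request line too long\n", stdout);
            continue;
        }
        fputs(line, stdout);           /* echo accepted lines */
    }
    return 0;
}
```
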
|
CVE-2020-35605 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/kovidgoyal/kitty/issues/3128', 'name': 'https://github.com/kovidgoyal/kitty/issues/3128', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Third Party Advisory']}, {'url': 'https://github.com/kovidgoyal/kitty/commit/82c137878c2b99100a3cdc1c0f0efea069313901', 'name': 'https://github.com/kovidgoyal/kitty/commit/82c137878c2b99100a3cdc1c0f0efea069313901', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://www.debian.org/security/2020/dsa-4819', 'name': 'DSA-4819', 'refsource': 'DEBIAN', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-Other'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:kitty_project:kitty:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.19.3', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'The Graphics Protocol feature in graphics.c in kitty before 0.19.3 allows remote attackers to execute arbitrary code because a filename containing special characters can be included in an error message.'}] | 2020-12-27T16:15Z | 2020-12-21T20:15Z | Other | NVD is only using a subset of CWE for mapping instead of the entire CWE, and the weakness type is not covered by that subset. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Kovid Goyal | 2020-11-29 11:50:14+05:18 | Graphics protocol: Dont return filename in the error message when opening file fails, since filenames can contain control characters
Fixes #3128 | 82c137878c2b99100a3cdc1c0f0efea069313901 | False | kovidgoyal/kitty | Cross-platform, fast, feature-rich, GPU based terminal | 2016-10-16 14:48:28 | 2022-08-27 08:11:34 | https://sw.kovidgoyal.net/kitty/ | kovidgoyal | 15797.0 | 743.0 | handle_add_command | handle_add_command( GraphicsManager * self , const GraphicsCommand * g , const uint8_t * payload , bool * is_dirty , uint32_t iid) | ['self', 'g', 'payload', 'is_dirty', 'iid'] | handle_add_command(GraphicsManager *self, const GraphicsCommand *g, const uint8_t *payload, bool *is_dirty, uint32_t iid) {
#define ABRT(code, ...) { set_add_response(#code, __VA_ARGS__); self->loading_image = 0; if (img) img->data_loaded = false; return NULL; }
#define MAX_DATA_SZ (4u * 100000000u)
has_add_respose = false;
bool existing, init_img = true;
Image *img = NULL;
unsigned char tt = g->transmission_type ? g->transmission_type : 'd';
enum FORMATS { RGB=24, RGBA=32, PNG=100 };
uint32_t fmt = g->format ? g->format : RGBA;
if (tt == 'd' && self->loading_image) init_img = false;
if (init_img) {
self->last_init_graphics_command = *g;
self->last_init_graphics_command.id = iid;
self->loading_image = 0;
if (g->data_width > 10000 || g->data_height > 10000) ABRT(EINVAL, "Image too large");
remove_images(self, add_trim_predicate, 0);
img = find_or_create_image(self, iid, &existing);
if (existing) {
free_load_data(&img->load_data);
img->data_loaded = false;
free_refs_data(img);
*is_dirty = true;
self->layers_dirty = true;
} else {
img->internal_id = internal_id_counter++;
img->client_id = iid;
}
img->atime = monotonic(); img->used_storage = 0;
img->width = g->data_width; img->height = g->data_height;
switch(fmt) {
case PNG:
if (g->data_sz > MAX_DATA_SZ) ABRT(EINVAL, "PNG data size too large");
img->load_data.is_4byte_aligned = true;
img->load_data.is_opaque = false;
img->load_data.data_sz = g->data_sz ? g->data_sz : 1024 * 100;
break;
case RGB:
case RGBA:
img->load_data.data_sz = (size_t)g->data_width * g->data_height * (fmt / 8);
if (!img->load_data.data_sz) ABRT(EINVAL, "Zero width/height not allowed");
img->load_data.is_4byte_aligned = fmt == RGBA || (img->width % 4 == 0);
img->load_data.is_opaque = fmt == RGB;
break;
default:
ABRT(EINVAL, "Unknown image format: %u", fmt);
}
if (tt == 'd') {
if (g->more) self->loading_image = img->internal_id;
img->load_data.buf_capacity = img->load_data.data_sz + (g->compressed ? 1024 : 10); // compression header
img->load_data.buf = malloc(img->load_data.buf_capacity);
img->load_data.buf_used = 0;
if (img->load_data.buf == NULL) {
ABRT(ENOMEM, "Out of memory");
img->load_data.buf_capacity = 0; img->load_data.buf_used = 0;
}
}
} else {
self->last_init_graphics_command.more = g->more;
self->last_init_graphics_command.payload_sz = g->payload_sz;
g = &self->last_init_graphics_command;
tt = g->transmission_type ? g->transmission_type : 'd';
fmt = g->format ? g->format : RGBA;
img = img_by_internal_id(self, self->loading_image);
if (img == NULL) {
self->loading_image = 0;
ABRT(EILSEQ, "More payload loading refers to non-existent image");
}
}
int fd;
static char fname[2056] = {0};
switch(tt) {
case 'd': // direct
if (img->load_data.buf_capacity - img->load_data.buf_used < g->payload_sz) {
if (img->load_data.buf_used + g->payload_sz > MAX_DATA_SZ || fmt != PNG) ABRT(EFBIG, "Too much data");
img->load_data.buf_capacity = MIN(2 * img->load_data.buf_capacity, MAX_DATA_SZ);
img->load_data.buf = realloc(img->load_data.buf, img->load_data.buf_capacity);
if (img->load_data.buf == NULL) {
ABRT(ENOMEM, "Out of memory");
img->load_data.buf_capacity = 0; img->load_data.buf_used = 0;
}
}
memcpy(img->load_data.buf + img->load_data.buf_used, payload, g->payload_sz);
img->load_data.buf_used += g->payload_sz;
if (!g->more) { img->data_loaded = true; self->loading_image = 0; }
break;
case 'f': // file
case 't': // temporary file
case 's': // POSIX shared memory
if (g->payload_sz > 2048) ABRT(EINVAL, "Filename too long");
snprintf(fname, sizeof(fname)/sizeof(fname[0]), "%.*s", (int)g->payload_sz, payload);
if (tt == 's') fd = shm_open(fname, O_RDONLY, 0);
else fd = open(fname, O_CLOEXEC | O_RDONLY);
if (fd == -1) ABRT(EBADF, "Failed to open file %s for graphics transmission with error: [%d] %s", fname, errno, strerror(errno));
img->data_loaded = mmap_img_file(self, img, fd, g->data_sz, g->data_offset);
safe_close(fd, __FILE__, __LINE__);
if (tt == 't') {
if (global_state.boss) { call_boss(safe_delete_temp_file, "s", fname); }
else unlink(fname);
}
else if (tt == 's') shm_unlink(fname);
break;
default:
ABRT(EINVAL, "Unknown transmission type: %c", g->transmission_type);
}
if (!img->data_loaded) return NULL;
self->loading_image = 0;
bool needs_processing = g->compressed || fmt == PNG;
if (needs_processing) {
uint8_t *buf; size_t bufsz;
#define IB { if (img->load_data.buf) { buf = img->load_data.buf; bufsz = img->load_data.buf_used; } else { buf = img->load_data.mapped_file; bufsz = img->load_data.mapped_file_sz; } }
switch(g->compressed) {
case 'z':
IB;
if (!inflate_zlib(self, img, buf, bufsz)) {
img->data_loaded = false; return NULL;
}
break;
case 0:
break;
default:
ABRT(EINVAL, "Unknown image compression: %c", g->compressed);
}
switch(fmt) {
case PNG:
IB;
if (!inflate_png(self, img, buf, bufsz)) {
img->data_loaded = false; return NULL;
}
break;
default: break;
}
#undef IB
img->load_data.data = img->load_data.buf;
if (img->load_data.buf_used < img->load_data.data_sz) {
ABRT(ENODATA, "Insufficient image data: %zu < %zu", img->load_data.buf_used, img->load_data.data_sz);
}
if (img->load_data.mapped_file) {
munmap(img->load_data.mapped_file, img->load_data.mapped_file_sz);
img->load_data.mapped_file = NULL; img->load_data.mapped_file_sz = 0;
}
} else {
if (tt == 'd') {
if (img->load_data.buf_used < img->load_data.data_sz) {
ABRT(ENODATA, "Insufficient image data: %zu < %zu", img->load_data.buf_used, img->load_data.data_sz);
} else img->load_data.data = img->load_data.buf;
} else {
if (img->load_data.mapped_file_sz < img->load_data.data_sz) {
ABRT(ENODATA, "Insufficient image data: %zu < %zu", img->load_data.mapped_file_sz, img->load_data.data_sz);
} else img->load_data.data = img->load_data.mapped_file;
}
}
size_t required_sz = (size_t)(img->load_data.is_opaque ? 3 : 4) * img->width * img->height;
if (img->load_data.data_sz != required_sz) ABRT(EINVAL, "Image dimensions: %ux%u do not match data size: %zu, expected size: %zu", img->width, img->height, img->load_data.data_sz, required_sz);
if (LIKELY(img->data_loaded && send_to_gpu)) {
send_image_to_gpu(&img->texture_id, img->load_data.data, img->width, img->height, img->load_data.is_opaque, img->load_data.is_4byte_aligned, false, REPEAT_CLAMP);
free_load_data(&img->load_data);
self->used_storage += required_sz;
img->used_storage = required_sz;
}
return img;
#undef MAX_DATA_SZ
#undef ABRT
} | 1447 | True | 1 |
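
The commit title attached to this record is the whole fix: an error string that echoes an attacker-chosen filename can carry terminal control characters back to the client. A sketch of reporting an open() failure without reflecting the untrusted name — hypothetical reporting code, not kitty's set_add_response/ABRT machinery:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Open a client-supplied path, but never place the path itself in the
 * message sent back over the escape-code channel; errno is enough. */
static int open_for_transmission(const char *untrusted_path, char *err, size_t errsz)
{
    int fd = open(untrusted_path, O_RDONLY | O_CLOEXEC);
    if (fd < 0) {
        snprintf(err, errsz,
                 "EBADF:Failed to open file for graphics transmission: [%d] %s",
                 errno, strerror(errno));
        return -1;
    }
    return fd;
}

int main(void)
{
    char err[256] = "";
    int fd = open_for_transmission("/no/such/\x1b[2Jfile", err, sizeof(err));
    if (fd < 0) {
        puts(err);          /* contains no bytes from the untrusted path */
    } else {
        close(fd);
    }
    return 0;
}
```
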
CVE-2020-35963 | False | False | False | True | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | LOCAL | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=27261', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=27261', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Third Party Advisory']}, {'url': 'https://fluentbit.io/announcements/v1.6.4/', 'name': 'https://fluentbit.io/announcements/v1.6.4/', 'refsource': 'MISC', 'tags': ['Release Notes', 'Vendor Advisory']}, {'url': 'https://github.com/fluent/fluent-bit/commit/cadff53c093210404aed01c4cf586adb8caa07af', 'name': 'https://github.com/fluent/fluent-bit/commit/cadff53c093210404aed01c4cf586adb8caa07af', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:treasuredata:fluent_bit:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.6.4', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:o:linux:linux_kernel:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}] | [{'lang': 'en', 'value': 'flb_gzip_compress in flb_gzip.c in Fluent Bit before 1.6.4 has an out-of-bounds write because it does not use the correct calculation of the maximum gzip data-size expansion.'}] | 2021-01-08T13:45Z | 2021-01-03T19:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | davkor | 2020-11-07 13:59:02+00:00 | gzip: fix compression size calculation (oss-fuzz 27261)
Signed-off-by: davkor <david@adalogics.com> | cadff53c093210404aed01c4cf586adb8caa07af | False | fluent/fluent-bit | Fast and Lightweight Logs and Metrics processor for Linux, BSD, OSX and Windows | 2015-01-27 20:41:52 | 2022-08-28 00:26:42 | https://fluentbit.io | fluent | 3857.0 | 1104.0 | flb_gzip_compress | flb_gzip_compress( void * in_data , size_t in_len , void ** out_data , size_t * out_len) | ['in_data', 'in_len', 'out_data', 'out_len'] | int flb_gzip_compress(void *in_data, size_t in_len,
void **out_data, size_t *out_len)
{
int flush;
int status;
int footer_start;
uint8_t *pb;
size_t out_size;
void *out_buf;
z_stream strm;
mz_ulong crc;
out_size = in_len + 32;
out_buf = flb_malloc(out_size);
if (!out_buf) {
flb_errno();
flb_error("[gzip] could not allocate outgoing buffer");
return -1;
}
/* Initialize streaming buffer context */
memset(&strm, '\0', sizeof(strm));
strm.zalloc = Z_NULL;
strm.zfree = Z_NULL;
strm.opaque = Z_NULL;
strm.next_in = in_data;
strm.avail_in = in_len;
strm.total_out = 0;
/* Deflate mode */
deflateInit2(&strm, Z_DEFAULT_COMPRESSION,
Z_DEFLATED, -Z_DEFAULT_WINDOW_BITS, 9, Z_DEFAULT_STRATEGY);
/*
* Miniz doesn't support the GZip format directly; instead we will:
*
* - append manual GZip magic bytes
* - deflate raw content
* - append manual CRC32 data
*/
gzip_header(out_buf);
/* Header offset */
pb = (uint8_t *) out_buf + FLB_GZIP_HEADER_OFFSET;
flush = Z_NO_FLUSH;
while (1) {
strm.next_out = pb + strm.total_out;
strm.avail_out = out_size - (pb - (uint8_t *) out_buf);
if (strm.avail_in == 0) {
flush = Z_FINISH;
}
status = deflate(&strm, flush);
if (status == Z_STREAM_END) {
break;
}
else if (status != Z_OK) {
deflateEnd(&strm);
return -1;
}
}
if (deflateEnd(&strm) != Z_OK) {
flb_free(out_buf);
return -1;
}
*out_len = strm.total_out;
/* Construct the gzip checksum (CRC32 footer) */
footer_start = FLB_GZIP_HEADER_OFFSET + *out_len;
pb = (uint8_t *) out_buf + footer_start;
crc = mz_crc32(MZ_CRC32_INIT, in_data, in_len);
*pb++ = crc & 0xFF;
*pb++ = (crc >> 8) & 0xFF;
*pb++ = (crc >> 16) & 0xFF;
*pb++ = (crc >> 24) & 0xFF;
*pb++ = in_len & 0xFF;
*pb++ = (in_len >> 8) & 0xFF;
*pb++ = (in_len >> 16) & 0xFF;
*pb++ = (in_len >> 24) & 0xFF;
/* Set the real buffer size for the caller */
*out_len += FLB_GZIP_HEADER_OFFSET + 8;
*out_data = out_buf;
return 0;
} | 413 | True | 1 |
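
The CVE text for this record points at the output-buffer sizing (`out_size = in_len + 32` above), which does not account for how much deflate output can grow on incompressible input. One conservative bound is zlib's compressBound() plus the 10-byte gzip header and 8-byte trailer; a sketch under that assumption, not the actual Fluent Bit patch (link with -lz):

```c
#include <stdio.h>
#include <zlib.h>

#define GZIP_HEADER_LEN  10   /* magic, method, flags, mtime, xfl, os */
#define GZIP_TRAILER_LEN  8   /* CRC32 + ISIZE */

/* Upper bound for a gzip-framed deflate of in_len bytes. compressBound()
 * covers the worst-case deflate expansion (plus a small zlib wrapper,
 * which only makes the estimate more generous). */
static size_t gzip_bound(size_t in_len)
{
    return (size_t)compressBound((uLong)in_len) + GZIP_HEADER_LEN + GZIP_TRAILER_LEN;
}

int main(void)
{
    for (size_t n = 0; n <= 64; n += 16) {
        printf("in=%zu bound=%zu\n", n, gzip_bound(n));
    }
    return 0;
}
```
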
CVE-2020-36315 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/154', 'name': 'https://github.com/relic-toolkit/relic/issues/154', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-327'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2020-08-01', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2020-08-01, RSA PKCS#1 v1.5 signature forgery can occur because certain checks of the padding (and of the first two bytes) are inadequate. NOTE: this requires that a low public exponent (such as 3) is being used. The product, by default, does not generate RSA keys with such a low number.'}] | 2022-07-12T17:42Z | 2021-04-07T21:15Z | Use of a Broken or Risky Cryptographic Algorithm | The use of a broken or risky cryptographic algorithm is an unnecessary risk that may result in the exposure of sensitive information. | The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Well-known techniques may exist to break the algorithm.
| https://cwe.mitre.org/data/definitions/327.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_basic | pad_basic( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_basic(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t pad = 0;
int result = RLC_OK;
bn_t t;
RLC_TRY {
bn_null(t);
bn_new(t);
switch (operation) {
case RSA_ENC:
case RSA_SIG:
case RSA_SIG_HASH:
/* EB = 00 | FF | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_DEC:
case RSA_VER:
case RSA_VER_HASH:
/* EB = 00 | FF | D. */
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
*p_len = 1;
do {
(*p_len)++;
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad == 0 && m_len > 0);
if (pad != RSA_PAD) {
result = RLC_ERR;
}
bn_mod_2b(m, m, (k_len - *p_len) * 8);
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
return result;
} | 236 | True | 1 |
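The decode/verify branch above skips leading zero bytes and then compares a single byte against RSA_PAD, which is the lenient logic the commit message ("inverting the padding check logic and being more rigorous") tightens. For a fixed 00 | FF | payload layout, a stricter check pins each header byte to its position and derives the payload length from the block length rather than from a scan; a minimal sketch with illustrative names, not RELIC's API:

#include <stdint.h>
#include <stddef.h>

/* Validate a fixed "00 FF payload" header at fixed offsets and report the
 * payload length implied by the block size. Returns 0 only when both header
 * bytes match exactly. */
static int check_basic_padding(const uint8_t *eb, size_t eb_len,
                               size_t *payload_len)
{
    if (eb_len < 2) {
        return -1;
    }
    if (eb[0] != 0x00 || eb[1] != 0xFF) {
        return -1;                     /* no scanning for the marker byte */
    }
    *payload_len = eb_len - 2;         /* everything after the fixed header */
    return 0;
}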
|
CVE-2020-36316 | False | False | False | True | AV:N/AC:M/Au:N/C:N/I:N/A:P | NETWORK | MEDIUM | NONE | NONE | NONE | PARTIAL | 4.3 | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H | LOCAL | LOW | NONE | REQUIRED | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Product', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/155', 'name': 'https://github.com/relic-toolkit/relic/issues/155', 'refsource': 'MISC', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2021-04-03', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2021-04-03, there is a buffer overflow in PKCS#1 v1.5 signature verification because garbage bytes can be present.'}] | 2021-04-16T13:55Z | 2021-04-07T21:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_basic | pad_basic( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_basic(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t pad = 0;
int result = RLC_OK;
bn_t t;
RLC_TRY {
bn_null(t);
bn_new(t);
switch (operation) {
case RSA_ENC:
case RSA_SIG:
case RSA_SIG_HASH:
/* EB = 00 | FF | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_DEC:
case RSA_VER:
case RSA_VER_HASH:
/* EB = 00 | FF | D. */
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
*p_len = 1;
do {
(*p_len)++;
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad == 0 && m_len > 0);
if (pad != RSA_PAD) {
result = RLC_ERR;
}
bn_mod_2b(m, m, (k_len - *p_len) * 8);
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
return result;
} | 236 | True | 1 |
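CVE-2020-36316 in this record describes a buffer overflow during verification when garbage bytes are present; one consequence of such lenient parsing is that the payload surviving padding removal can be longer than the fixed-size buffer the verifier copies it into. Independent of how the padding itself is parsed, a length check before the copy blocks that class of overflow; a small sketch with illustrative names:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Copy the recovered signature payload into a fixed-size digest buffer only
 * when its length matches exactly; trailing garbage or truncation is an
 * error, never a reason to write past dst. */
static int copy_recovered_digest(uint8_t *dst, size_t dst_len,
                                 const uint8_t *recovered, size_t recovered_len)
{
    if (recovered_len != dst_len) {
        return -1;
    }
    memcpy(dst, recovered, dst_len);
    return 0;
}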
|
CVE-2020-36315 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/154', 'name': 'https://github.com/relic-toolkit/relic/issues/154', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-327'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2020-08-01', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2020-08-01, RSA PKCS#1 v1.5 signature forgery can occur because certain checks of the padding (and of the first two bytes) are inadequate. NOTE: this requires that a low public exponent (such as 3) is being used. The product, by default, does not generate RSA keys with such a low number.'}] | 2022-07-12T17:42Z | 2021-04-07T21:15Z | Use of a Broken or Risky Cryptographic Algorithm | The use of a broken or risky cryptographic algorithm is an unnecessary risk that may result in the exposure of sensitive information. | The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Well-known techniques may exist to break the algorithm.
| https://cwe.mitre.org/data/definitions/327.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_pkcs1 | pad_pkcs1( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_pkcs1(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t *id, pad = 0;
int len, result = RLC_OK;
bn_t t;
bn_null(t);
RLC_TRY {
bn_new(t);
switch (operation) {
case RSA_ENC:
/* EB = 00 | 02 | PS | 00 | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PUB);
*p_len = k_len - 3 - m_len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
do {
rand_bytes(&pad, 1);
} while (pad == 0);
bn_add_dig(m, m, pad);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_DEC:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
*p_len = m_len;
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PUB) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
/* Remove padding and trailing zero. */
*p_len -= (m_len - 1);
bn_mod_2b(m, m, (k_len - *p_len) * 8);
break;
case RSA_SIG:
/* EB = 00 | 01 | PS | 00 | D. */
id = hash_id(MD_MAP, &len);
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PRV);
*p_len = k_len - 3 - m_len - len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
bn_lsh(m, m, 8 * len);
bn_read_bin(t, id, len);
bn_add(m, m, t);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_SIG_HASH:
/* EB = 00 | 01 | PS | 00 | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PRV);
*p_len = k_len - 3 - m_len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_VER:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PRV) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
if (m_len == 0) {
result = RLC_ERR;
}
/* Remove padding and trailing zero. */
id = hash_id(MD_MAP, &len);
m_len -= len;
bn_rsh(t, m, m_len * 8);
int r = 0;
for (int i = 0; i < len; i++) {
pad = (uint8_t)t->dp[0];
r |= pad - id[len - i - 1];
bn_rsh(t, t, 8);
}
*p_len = k_len - m_len;
bn_mod_2b(m, m, m_len * 8);
result = (r == 0 ? RLC_OK : RLC_ERR);
break;
case RSA_VER_HASH:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PRV) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
if (m_len == 0) {
result = RLC_ERR;
}
/* Remove padding and trailing zero. */
*p_len = k_len - m_len;
bn_mod_2b(m, m, m_len * 8);
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
return result;
} | 961 | True | 1 |
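The RSA_VER case above checks the 00 01 prefix and then scans forward to the first zero byte without confirming that the padding string in between is all 0xFF or at least eight bytes long. A hedged sketch of the strict layout check, with illustrative names rather than RELIC's API:

#include <stdint.h>
#include <stddef.h>

/* Check that em is exactly 00 01 FF..FF 00 || digest with at least eight
 * bytes of 0xFF padding. Returns 0 if the layout is valid, -1 otherwise. */
static int check_pkcs1_v15_sig_padding(const uint8_t *em, size_t em_len,
                                       size_t digest_len)
{
    if (em_len < digest_len + 11) {
        return -1;                      /* too short for minimal padding */
    }
    if (em[0] != 0x00 || em[1] != 0x01) {
        return -1;
    }
    size_t pad_len = em_len - digest_len - 3;   /* bytes that must be 0xFF */
    for (size_t i = 2; i < 2 + pad_len; i++) {
        if (em[i] != 0xFF) {
            return -1;                  /* no garbage allowed inside PS */
        }
    }
    if (em[2 + pad_len] != 0x00) {
        return -1;                      /* separator before the digest */
    }
    return 0;                           /* digest starts at em + em_len - digest_len */
}

Every byte of the block is pinned either to a fixed value or to the digest, which is what defeats the low-exponent forgeries the CVE description refers to.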
|
CVE-2020-36316 | False | False | False | True | AV:N/AC:M/Au:N/C:N/I:N/A:P | NETWORK | MEDIUM | NONE | NONE | NONE | PARTIAL | 4.3 | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H | LOCAL | LOW | NONE | REQUIRED | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Product', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/155', 'name': 'https://github.com/relic-toolkit/relic/issues/155', 'refsource': 'MISC', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2021-04-03', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2021-04-03, there is a buffer overflow in PKCS#1 v1.5 signature verification because garbage bytes can be present.'}] | 2021-04-16T13:55Z | 2021-04-07T21:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_pkcs1 | pad_pkcs1( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_pkcs1(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t *id, pad = 0;
int len, result = RLC_OK;
bn_t t;
bn_null(t);
RLC_TRY {
bn_new(t);
switch (operation) {
case RSA_ENC:
/* EB = 00 | 02 | PS | 00 | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PUB);
*p_len = k_len - 3 - m_len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
do {
rand_bytes(&pad, 1);
} while (pad == 0);
bn_add_dig(m, m, pad);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_DEC:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
*p_len = m_len;
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PUB) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
/* Remove padding and trailing zero. */
*p_len -= (m_len - 1);
bn_mod_2b(m, m, (k_len - *p_len) * 8);
break;
case RSA_SIG:
/* EB = 00 | 01 | PS | 00 | D. */
id = hash_id(MD_MAP, &len);
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PRV);
*p_len = k_len - 3 - m_len - len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
bn_lsh(m, m, 8 * len);
bn_read_bin(t, id, len);
bn_add(m, m, t);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_SIG_HASH:
/* EB = 00 | 01 | PS | 00 | D. */
bn_zero(m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PRV);
*p_len = k_len - 3 - m_len;
for (int i = 0; i < *p_len; i++) {
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PAD);
}
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_VER:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PRV) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
if (m_len == 0) {
result = RLC_ERR;
}
/* Remove padding and trailing zero. */
id = hash_id(MD_MAP, &len);
m_len -= len;
bn_rsh(t, m, m_len * 8);
int r = 0;
for (int i = 0; i < len; i++) {
pad = (uint8_t)t->dp[0];
r |= pad - id[len - i - 1];
bn_rsh(t, t, 8);
}
*p_len = k_len - m_len;
bn_mod_2b(m, m, m_len * 8);
result = (r == 0 ? RLC_OK : RLC_ERR);
break;
case RSA_VER_HASH:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
if (pad != RSA_PRV) {
result = RLC_ERR;
}
do {
m_len--;
bn_rsh(t, m, 8 * m_len);
pad = (uint8_t)t->dp[0];
} while (pad != 0 && m_len > 0);
if (m_len == 0) {
result = RLC_ERR;
}
/* Remove padding and trailing zero. */
*p_len = k_len - m_len;
bn_mod_2b(m, m, m_len * 8);
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
return result;
} | 961 | True | 1 |
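CVE-2020-36316 attached to this record concerns garbage bytes surviving the scan in the verification path. One way to avoid parsing the decrypted block at all is to rebuild the expected encoded message and compare the two blocks byte for byte; a hedged sketch with illustrative names, assuming the caller already holds the DigestInfo-prefixed hash t:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Build EM' = 00 01 FF..FF 00 || t into scratch (em_len bytes) and compare it
 * with the decrypted block em. Returns 0 only on an exact match. */
static int verify_by_reencoding(const uint8_t *em, size_t em_len,
                                const uint8_t *t, size_t t_len,
                                uint8_t *scratch)
{
    if (em_len < t_len + 11) {
        return -1;
    }
    scratch[0] = 0x00;
    scratch[1] = 0x01;
    memset(scratch + 2, 0xFF, em_len - t_len - 3);
    scratch[em_len - t_len - 1] = 0x00;
    memcpy(scratch + em_len - t_len, t, t_len);
    return memcmp(scratch, em, em_len) == 0 ? 0 : -1;
}

Because the comparison covers the whole block, there is no variable-length payload to copy out, so garbage bytes can only make the match fail.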
|
CVE-2020-36315 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/154', 'name': 'https://github.com/relic-toolkit/relic/issues/154', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-327'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2020-08-01', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2020-08-01, RSA PKCS#1 v1.5 signature forgery can occur because certain checks of the padding (and of the first two bytes) are inadequate. NOTE: this requires that a low public exponent (such as 3) is being used. The product, by default, does not generate RSA keys with such a low number.'}] | 2022-07-12T17:42Z | 2021-04-07T21:15Z | Use of a Broken or Risky Cryptographic Algorithm | The use of a broken or risky cryptographic algorithm is an unnecessary risk that may result in the exposure of sensitive information. | The use of a non-standard algorithm is dangerous because a determined attacker may be able to break the algorithm and compromise whatever data has been protected. Well-known techniques may exist to break the algorithm.
| https://cwe.mitre.org/data/definitions/327.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_pkcs2 | pad_pkcs2( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_pkcs2(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t pad, h1[RLC_MD_LEN], h2[RLC_MD_LEN];
/* MSVC does not allow dynamic stack arrays */
uint8_t *mask = RLC_ALLOCA(uint8_t, k_len);
int result = RLC_OK;
bn_t t;
bn_null(t);
RLC_TRY {
bn_new(t);
switch (operation) {
case RSA_ENC:
/* DB = lHash | PS | 01 | D. */
md_map(h1, NULL, 0);
bn_read_bin(m, h1, RLC_MD_LEN);
*p_len = k_len - 2 * RLC_MD_LEN - 2 - m_len;
bn_lsh(m, m, *p_len * 8);
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0x01);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_ENC_FIN:
/* EB = 00 | maskedSeed | maskedDB. */
rand_bytes(h1, RLC_MD_LEN);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
bn_write_bin(mask, k_len - RLC_MD_LEN - 1, m);
md_mgf(h2, RLC_MD_LEN, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < RLC_MD_LEN; i++) {
h1[i] ^= h2[i];
}
bn_read_bin(t, h1, RLC_MD_LEN);
bn_lsh(t, t, 8 * (k_len - RLC_MD_LEN - 1));
bn_add(t, t, m);
bn_copy(m, t);
break;
case RSA_DEC:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len -= RLC_MD_LEN;
bn_rsh(t, m, 8 * m_len);
bn_write_bin(h1, RLC_MD_LEN, t);
bn_mod_2b(m, m, 8 * m_len);
bn_write_bin(mask, m_len, m);
md_mgf(h2, RLC_MD_LEN, mask, m_len);
for (int i = 0; i < RLC_MD_LEN; i++) {
h1[i] ^= h2[i];
}
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
m_len -= RLC_MD_LEN;
bn_rsh(t, m, 8 * m_len);
bn_write_bin(h2, RLC_MD_LEN, t);
md_map(h1, NULL, 0);
pad = 0;
for (int i = 0; i < RLC_MD_LEN; i++) {
pad |= h1[i] - h2[i];
}
if (result == RLC_OK) {
result = (pad ? RLC_ERR : RLC_OK);
}
bn_mod_2b(m, m, 8 * m_len);
*p_len = bn_size_bin(m);
(*p_len)--;
bn_rsh(t, m, *p_len * 8);
if (bn_cmp_dig(t, 1) != RLC_EQ) {
result = RLC_ERR;
}
bn_mod_2b(m, m, *p_len * 8);
*p_len = k_len - *p_len;
break;
case RSA_SIG:
case RSA_SIG_HASH:
/* M' = 00 00 00 00 00 00 00 00 | H(M). */
bn_zero(m);
bn_lsh(m, m, 64);
/* Make room for the real message. */
bn_lsh(m, m, RLC_MD_LEN * 8);
break;
case RSA_SIG_FIN:
memset(mask, 0, 8);
bn_write_bin(mask + 8, RLC_MD_LEN, m);
md_map(h1, mask, RLC_MD_LEN + 8);
bn_read_bin(m, h1, RLC_MD_LEN);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
t->dp[0] ^= 0x01;
/* m_len is now the size in bits of the modulus. */
bn_lsh(t, t, 8 * RLC_MD_LEN);
bn_add(m, t, m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PSS);
for (int i = m_len - 1; i < 8 * k_len; i++) {
bn_set_bit(m, i, 0);
}
break;
case RSA_VER:
case RSA_VER_HASH:
bn_mod_2b(t, m, 8);
if (bn_cmp_dig(t, RSA_PSS) != RLC_EQ) {
result = RLC_ERR;
} else {
for (int i = m_len; i < 8 * k_len; i++) {
if (bn_get_bit(m, i) != 0) {
result = RLC_ERR;
}
}
bn_rsh(m, m, 8);
bn_mod_2b(t, m, 8 * RLC_MD_LEN);
bn_write_bin(h2, RLC_MD_LEN, t);
bn_rsh(m, m, 8 * RLC_MD_LEN);
bn_write_bin(h1, RLC_MD_LEN, t);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
m->dp[0] ^= 0x01;
for (int i = m_len - 1; i < 8 * k_len; i++) {
bn_set_bit(m, i - ((RLC_MD_LEN + 1) * 8), 0);
}
if (!bn_is_zero(m)) {
result = RLC_ERR;
}
bn_read_bin(m, h2, RLC_MD_LEN);
*p_len = k_len - RLC_MD_LEN;
}
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
RLC_FREE(mask);
return result;
} | 1114 | True | 1 |
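The OAEP branches above subtract quantities such as 2 * RLC_MD_LEN + 2 from k_len before using the result as a length; whether an equivalent minimum-size check happens elsewhere in RELIC is not visible in this record. RFC 8017 requires k >= 2*hLen + 2 for OAEP, and a caller-side guard along these lines (illustrative names) keeps those subtractions from wrapping:

#include <stddef.h>

/* Reject an OAEP operation when the modulus length cannot hold two hash
 * values plus the mandatory 0x00 and 0x01 bytes, or when the message does
 * not fit in the remaining data block. Returns 1 when the sizes are sane. */
static int oaep_size_ok(size_t k_len, size_t hash_len, size_t msg_len)
{
    if (k_len < 2 * hash_len + 2) {
        return 0;
    }
    if (msg_len > k_len - 2 * hash_len - 2) {
        return 0;
    }
    return 1;
}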
|
CVE-2020-36316 | False | False | False | True | AV:N/AC:M/Au:N/C:N/I:N/A:P | NETWORK | MEDIUM | NONE | NONE | NONE | PARTIAL | 4.3 | CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H | LOCAL | LOW | NONE | REQUIRED | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'name': 'https://github.com/relic-toolkit/relic/commit/76c9a1fdf19d9e92e566a77376673e522aae9f80', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'name': 'https://github.com/relic-toolkit/relic/tree/32eb4c257fc80328061d66639b1cdb35dbed51a2', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/', 'name': 'https://github.com/relic-toolkit/relic/', 'refsource': 'MISC', 'tags': ['Product', 'Third Party Advisory']}, {'url': 'https://github.com/relic-toolkit/relic/issues/155', 'name': 'https://github.com/relic-toolkit/relic/issues/155', 'refsource': 'MISC', 'tags': ['Exploit', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:relic_project:relic:*:*:*:*:*:*:*:*', 'versionEndExcluding': '2021-04-03', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'In RELIC before 2021-04-03, there is a buffer overflow in PKCS#1 v1.5 signature verification because garbage bytes can be present.'}] | 2021-04-16T13:55Z | 2021-04-07T21:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Diego F. Aranha | 2020-08-02 01:53:19+02:00 | Fix #154 and #155 by inverting the padding check logic and being more rigorous. | 76c9a1fdf19d9e92e566a77376673e522aae9f80 | False | relic-toolkit/relic | Code | 2014-08-18 21:34:41 | 2022-08-14 00:03:59 | relic-toolkit | 352.0 | 145.0 | pad_pkcs2 | pad_pkcs2( bn_t m , int * p_len , int m_len , int k_len , int operation) | ['m', 'p_len', 'm_len', 'k_len', 'operation'] | static int pad_pkcs2(bn_t m, int *p_len, int m_len, int k_len, int operation) {
uint8_t pad, h1[RLC_MD_LEN], h2[RLC_MD_LEN];
/* MSVC does not allow dynamic stack arrays */
uint8_t *mask = RLC_ALLOCA(uint8_t, k_len);
int result = RLC_OK;
bn_t t;
bn_null(t);
RLC_TRY {
bn_new(t);
switch (operation) {
case RSA_ENC:
/* DB = lHash | PS | 01 | D. */
md_map(h1, NULL, 0);
bn_read_bin(m, h1, RLC_MD_LEN);
*p_len = k_len - 2 * RLC_MD_LEN - 2 - m_len;
bn_lsh(m, m, *p_len * 8);
bn_lsh(m, m, 8);
bn_add_dig(m, m, 0x01);
/* Make room for the real message. */
bn_lsh(m, m, m_len * 8);
break;
case RSA_ENC_FIN:
/* EB = 00 | maskedSeed | maskedDB. */
rand_bytes(h1, RLC_MD_LEN);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
bn_write_bin(mask, k_len - RLC_MD_LEN - 1, m);
md_mgf(h2, RLC_MD_LEN, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < RLC_MD_LEN; i++) {
h1[i] ^= h2[i];
}
bn_read_bin(t, h1, RLC_MD_LEN);
bn_lsh(t, t, 8 * (k_len - RLC_MD_LEN - 1));
bn_add(t, t, m);
bn_copy(m, t);
break;
case RSA_DEC:
m_len = k_len - 1;
bn_rsh(t, m, 8 * m_len);
if (!bn_is_zero(t)) {
result = RLC_ERR;
}
m_len -= RLC_MD_LEN;
bn_rsh(t, m, 8 * m_len);
bn_write_bin(h1, RLC_MD_LEN, t);
bn_mod_2b(m, m, 8 * m_len);
bn_write_bin(mask, m_len, m);
md_mgf(h2, RLC_MD_LEN, mask, m_len);
for (int i = 0; i < RLC_MD_LEN; i++) {
h1[i] ^= h2[i];
}
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
m_len -= RLC_MD_LEN;
bn_rsh(t, m, 8 * m_len);
bn_write_bin(h2, RLC_MD_LEN, t);
md_map(h1, NULL, 0);
pad = 0;
for (int i = 0; i < RLC_MD_LEN; i++) {
pad |= h1[i] - h2[i];
}
if (result == RLC_OK) {
result = (pad ? RLC_ERR : RLC_OK);
}
bn_mod_2b(m, m, 8 * m_len);
*p_len = bn_size_bin(m);
(*p_len)--;
bn_rsh(t, m, *p_len * 8);
if (bn_cmp_dig(t, 1) != RLC_EQ) {
result = RLC_ERR;
}
bn_mod_2b(m, m, *p_len * 8);
*p_len = k_len - *p_len;
break;
case RSA_SIG:
case RSA_SIG_HASH:
/* M' = 00 00 00 00 00 00 00 00 | H(M). */
bn_zero(m);
bn_lsh(m, m, 64);
/* Make room for the real message. */
bn_lsh(m, m, RLC_MD_LEN * 8);
break;
case RSA_SIG_FIN:
memset(mask, 0, 8);
bn_write_bin(mask + 8, RLC_MD_LEN, m);
md_map(h1, mask, RLC_MD_LEN + 8);
bn_read_bin(m, h1, RLC_MD_LEN);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
t->dp[0] ^= 0x01;
/* m_len is now the size in bits of the modulus. */
bn_lsh(t, t, 8 * RLC_MD_LEN);
bn_add(m, t, m);
bn_lsh(m, m, 8);
bn_add_dig(m, m, RSA_PSS);
for (int i = m_len - 1; i < 8 * k_len; i++) {
bn_set_bit(m, i, 0);
}
break;
case RSA_VER:
case RSA_VER_HASH:
bn_mod_2b(t, m, 8);
if (bn_cmp_dig(t, RSA_PSS) != RLC_EQ) {
result = RLC_ERR;
} else {
for (int i = m_len; i < 8 * k_len; i++) {
if (bn_get_bit(m, i) != 0) {
result = RLC_ERR;
}
}
bn_rsh(m, m, 8);
bn_mod_2b(t, m, 8 * RLC_MD_LEN);
bn_write_bin(h2, RLC_MD_LEN, t);
bn_rsh(m, m, 8 * RLC_MD_LEN);
bn_write_bin(h1, RLC_MD_LEN, t);
md_mgf(mask, k_len - RLC_MD_LEN - 1, h1, RLC_MD_LEN);
bn_read_bin(t, mask, k_len - RLC_MD_LEN - 1);
for (int i = 0; i < t->used; i++) {
m->dp[i] ^= t->dp[i];
}
m->dp[0] ^= 0x01;
for (int i = m_len - 1; i < 8 * k_len; i++) {
bn_set_bit(m, i - ((RLC_MD_LEN + 1) * 8), 0);
}
if (!bn_is_zero(m)) {
result = RLC_ERR;
}
bn_read_bin(m, h2, RLC_MD_LEN);
*p_len = k_len - RLC_MD_LEN;
}
break;
}
}
RLC_CATCH_ANY {
result = RLC_ERR;
}
RLC_FINALLY {
bn_free(t);
}
RLC_FREE(mask);
return result;
} | 1114 | True | 1 |
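The decryption and verification paths above accumulate hash differences with pad |= h1[i] - h2[i] instead of returning at the first mismatch, so the comparison time does not depend on where the buffers differ. The same idiom as a standalone helper, with an illustrative name:

#include <stdint.h>
#include <stddef.h>

/* Compare two equal-length buffers without branching on their contents.
 * Returns 0 when equal, non-zero otherwise; the running time depends only
 * on len, not on the position of the first difference. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff;
}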
|
CVE-2020-36400 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/zeromq/libzmq/commit/397ac80850bf8d010fae23dd215db0ee2c677306', 'name': 'https://github.com/zeromq/libzmq/commit/397ac80850bf8d010fae23dd215db0ee2c677306', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=26042', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=26042', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Mailing List', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/libzmq/OSV-2020-1887.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/libzmq/OSV-2020-1887.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:zeromq:libzmq:4.3.3:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'ZeroMQ libzmq 4.3.3 has a heap-based buffer overflow in zmq::tcp_read, a different vulnerability than CVE-2021-20235.'}] | 2021-07-06T11:41Z | 2021-07-01T03:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Luca Boccassi | 2020-05-07 00:19:40+01:00 | Problem: ZMTP v1 static allocator is needlessly resized
Solution: don't do it, resizing the shared allocator makes sense
as it can take the message buff for zero copy, but the static allocator
is fixed | 397ac80850bf8d010fae23dd215db0ee2c677306 | False | zeromq/libzmq | ZeroMQ core engine in C++, implements ZMTP/3.1 | 2009-07-29 09:56:41 | 2022-08-19 09:25:23 | https://www.zeromq.org | zeromq | 7957.0 | 2171.0 | zmq::c_single_allocator::resize | zmq::c_single_allocator::resize( std :: size_t new_size_) | ['new_size_'] | void resize (std::size_t new_size_) { _buf_size = new_size_; } | 13 | True | 1 |
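The commit message in this record says the ZMTP v1 static allocator owns a fixed buffer and should simply not be resized; only the shared allocator may grow, because it can hand its buffer to a zero-copy message. A related defensive sketch, with made-up names and capacity rather than libzmq's API, makes a resize request against a fixed buffer fail instead of silently pretending to grow:

#include <stddef.h>

#define STATIC_BUF_CAPACITY 8192

typedef struct {
    unsigned char buf[STATIC_BUF_CAPACITY];
    size_t size;                 /* advertised size, never above capacity */
} static_allocator_t;

/* Clamp resize requests to the fixed capacity; callers that need more space
 * must take an error path rather than write past buf. */
static int static_allocator_resize(static_allocator_t *a, size_t new_size)
{
    if (new_size > STATIC_BUF_CAPACITY) {
        return -1;
    }
    a->size = new_size;
    return 0;
}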
CVE-2020-36403 | False | False | False | True | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/htslib/OSV-2020-955.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/htslib/OSV-2020-955.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/samtools/htslib/commit/dcd4b7304941a8832fba2d0fc4c1e716e7a4e72c', 'name': 'https://github.com/samtools/htslib/commit/dcd4b7304941a8832fba2d0fc4c1e716e7a4e72c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=24097', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=24097', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Patch', 'Third Party Advisory']}, {'url': 'https://github.com/samtools/htslib/pull/1447', 'name': 'https://github.com/samtools/htslib/pull/1447', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:htslib:htslib:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.10', 'versionEndIncluding': '1.10.2', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:o:linux:linux_kernel:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}] | [{'lang': 'en', 'value': 'HTSlib through 1.10.2 allows out-of-bounds write access in vcf_parse_format (called from vcf_parse and vcf_read).'}] | 2022-06-10T12:15Z | 2021-07-01T03:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Rob Davies | 2020-07-13 11:48:49+01:00 | Fix check for VCF record size
The check for excessive record size in vcf_parse_format() only
looked at individual fields. It was therefore possible to
exceed the limit and overflow fmt_aux_t::offset by having
multiple fields with a combined size that went over INT_MAX.
Fix by including the amount of memory used so far in the check.
Credit to OSS-Fuzz
Fixes oss-fuzz 24097 | dcd4b7304941a8832fba2d0fc4c1e716e7a4e72c | False | samtools/htslib | C library for high-throughput sequencing data formats | 2012-05-15 19:34:48 | 2022-08-24 08:53:47 | samtools | 632.0 | 409.0 | vcf_parse_format | vcf_parse_format( kstring_t * s , const bcf_hdr_t * h , bcf1_t * v , char * p , char * q) | ['s', 'h', 'v', 'p', 'q'] | static int vcf_parse_format(kstring_t *s, const bcf_hdr_t *h, bcf1_t *v, char *p, char *q)
{
if ( !bcf_hdr_nsamples(h) ) return 0;
static int extreme_val_warned = 0;
char *r, *t;
int j, l, m, g, overflow = 0;
khint_t k;
ks_tokaux_t aux1;
vdict_t *d = (vdict_t*)h->dict[BCF_DT_ID];
kstring_t *mem = (kstring_t*)&h->mem;
fmt_aux_t fmt[MAX_N_FMT];
mem->l = 0;
char *end = s->s + s->l;
if ( q>=end )
{
hts_log_error("FORMAT column with no sample columns starting at %s:%"PRIhts_pos"", bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_NCOLS;
return -1;
}
v->n_fmt = 0;
if ( p[0]=='.' && p[1]==0 ) // FORMAT field is empty "."
{
v->n_sample = bcf_hdr_nsamples(h);
return 0;
}
// get format information from the dictionary
for (j = 0, t = kstrtok(p, ":", &aux1); t; t = kstrtok(0, 0, &aux1), ++j) {
if (j >= MAX_N_FMT) {
v->errcode |= BCF_ERR_LIMITS;
hts_log_error("FORMAT column at %s:%"PRIhts_pos" lists more identifiers than htslib can handle",
bcf_seqname_safe(h,v), v->pos+1);
return -1;
}
*(char*)aux1.p = 0;
k = kh_get(vdict, d, t);
if (k == kh_end(d) || kh_val(d, k).info[BCF_HL_FMT] == 15) {
if ( t[0]=='.' && t[1]==0 )
{
hts_log_error("Invalid FORMAT tag name '.' at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_TAG_INVALID;
return -1;
}
hts_log_warning("FORMAT '%s' at %s:%"PRIhts_pos" is not defined in the header, assuming Type=String", t, bcf_seqname_safe(h,v), v->pos+1);
kstring_t tmp = {0,0,0};
int l;
ksprintf(&tmp, "##FORMAT=<ID=%s,Number=1,Type=String,Description=\"Dummy\">", t);
bcf_hrec_t *hrec = bcf_hdr_parse_line(h,tmp.s,&l);
free(tmp.s);
int res = hrec ? bcf_hdr_add_hrec((bcf_hdr_t*)h, hrec) : -1;
if (res < 0) bcf_hrec_destroy(hrec);
if (res > 0) res = bcf_hdr_sync((bcf_hdr_t*)h);
k = kh_get(vdict, d, t);
v->errcode = BCF_ERR_TAG_UNDEF;
if (res || k == kh_end(d)) {
hts_log_error("Could not add dummy header for FORMAT '%s' at %s:%"PRIhts_pos, t, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_TAG_INVALID;
return -1;
}
}
fmt[j].max_l = fmt[j].max_m = fmt[j].max_g = 0;
fmt[j].key = kh_val(d, k).id;
fmt[j].is_gt = !strcmp(t, "GT");
fmt[j].y = h->id[0][fmt[j].key].val->info[BCF_HL_FMT];
v->n_fmt++;
}
// compute max
int n_sample_ori = -1;
r = q + 1; // r: position in the format string
l = 0, m = g = 1, v->n_sample = 0; // m: max vector size, l: max field len, g: max number of alleles
while ( r<end )
{
// can we skip some samples?
if ( h->keep_samples )
{
n_sample_ori++;
if ( !bit_array_test(h->keep_samples,n_sample_ori) )
{
while ( *r!='\t' && r<end ) r++;
if ( *r=='\t' ) { *r = 0; r++; }
continue;
}
}
// collect fmt stats: max vector size, length, number of alleles
j = 0; // j-th format field
fmt_aux_t *f = fmt;
for (;;) {
switch (*r) {
case ',':
m++;
break;
case '|':
case '/':
if (f->is_gt) g++;
break;
case '\t':
*r = 0; // fall through
case '\0':
case ':':
if (f->max_m < m) f->max_m = m;
if (f->max_l < l) f->max_l = l;
if (f->is_gt && f->max_g < g) f->max_g = g;
l = 0, m = g = 1;
if ( *r==':' ) {
j++; f++;
if ( j>=v->n_fmt ) {
hts_log_error("Incorrect number of FORMAT fields at %s:%"PRIhts_pos"",
h->id[BCF_DT_CTG][v->rid].key, v->pos+1);
v->errcode |= BCF_ERR_NCOLS;
return -1;
}
} else goto end_for;
break;
}
if ( r>=end ) break;
r++; l++;
}
end_for:
v->n_sample++;
if ( v->n_sample == bcf_hdr_nsamples(h) ) break;
r++;
}
// allocate memory for arrays
for (j = 0; j < v->n_fmt; ++j) {
fmt_aux_t *f = &fmt[j];
if ( !f->max_m ) f->max_m = 1; // omitted trailing format field
if ((f->y>>4&0xf) == BCF_HT_STR) {
f->size = f->is_gt? f->max_g << 2 : f->max_l;
} else if ((f->y>>4&0xf) == BCF_HT_REAL || (f->y>>4&0xf) == BCF_HT_INT) {
f->size = f->max_m << 2;
} else
{
hts_log_error("The format type %d at %s:%"PRIhts_pos" is currently not supported", f->y>>4&0xf, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_TAG_INVALID;
return -1;
}
if (align_mem(mem) < 0) {
hts_log_error("Memory allocation failure at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_LIMITS;
return -1;
}
f->offset = mem->l;
// Limit the total memory to ~2Gb per VCF row. This should mean
// malformed VCF data is less likely to take excessive memory and/or
// time.
if (v->n_sample * (uint64_t)f->size > INT_MAX) {
hts_log_error("Excessive memory required by FORMAT fields at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_LIMITS;
return -1;
}
if (ks_resize(mem, mem->l + v->n_sample * (size_t)f->size) < 0) {
hts_log_error("Memory allocation failure at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_LIMITS;
return -1;
}
mem->l += v->n_sample * f->size;
}
for (j = 0; j < v->n_fmt; ++j)
fmt[j].buf = (uint8_t*)mem->s + fmt[j].offset;
// fill the sample fields; at beginning of the loop, t points to the first char of a format
n_sample_ori = -1;
t = q + 1; m = 0; // m: sample id
while ( t<end )
{
// can we skip some samples?
if ( h->keep_samples )
{
n_sample_ori++;
if ( !bit_array_test(h->keep_samples,n_sample_ori) )
{
while ( *t && t<end ) t++;
t++;
continue;
}
}
if ( m == bcf_hdr_nsamples(h) ) break;
j = 0; // j-th format field, m-th sample
while ( t < end )
{
fmt_aux_t *z = &fmt[j++];
if (!z->buf) {
hts_log_error("Memory allocation failure for FORMAT field type %d at %s:%"PRIhts_pos,
z->y>>4&0xf, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_LIMITS;
return -1;
}
if ((z->y>>4&0xf) == BCF_HT_STR) {
if (z->is_gt) { // genotypes
int32_t is_phased = 0;
uint32_t *x = (uint32_t*)(z->buf + z->size * (size_t)m);
uint32_t unreadable = 0;
uint32_t max = 0;
overflow = 0;
for (l = 0;; ++t) {
if (*t == '.') {
++t, x[l++] = is_phased;
} else {
char *tt = t;
uint32_t val = hts_str2uint(t, &t, sizeof(val) * CHAR_MAX - 2, &overflow);
unreadable |= tt == t;
if (max < val) max = val;
x[l++] = (val + 1) << 1 | is_phased;
}
is_phased = (*t == '|');
if (*t != '|' && *t != '/') break;
}
// Possibly check max against v->n_allele instead?
if (overflow || max > (INT32_MAX >> 1) - 1) {
hts_log_error("Couldn't read GT data: value too large at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
return -1;
}
if (unreadable) {
hts_log_error("Couldn't read GT data: value not a number or '.' at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
return -1;
}
if ( !l ) x[l++] = 0; // An empty field, insert missing value
for (; l < z->size>>2; ++l) x[l] = bcf_int32_vector_end;
} else {
char *x = (char*)z->buf + z->size * (size_t)m;
for (r = t, l = 0; *t != ':' && *t; ++t) x[l++] = *t;
for (; l < z->size; ++l) x[l] = 0;
}
} else if ((z->y>>4&0xf) == BCF_HT_INT) {
int32_t *x = (int32_t*)(z->buf + z->size * (size_t)m);
for (l = 0;; ++t) {
if (*t == '.') {
x[l++] = bcf_int32_missing, ++t; // ++t to skip "."
} else {
overflow = 0;
char *te;
long int tmp_val = hts_str2int(t, &te, sizeof(tmp_val)*CHAR_BIT, &overflow);
if ( te==t || overflow || tmp_val<BCF_MIN_BT_INT32 || tmp_val>BCF_MAX_BT_INT32 )
{
if ( !extreme_val_warned )
{
hts_log_warning("Extreme FORMAT/%s value encountered and set to missing at %s:%"PRIhts_pos, h->id[BCF_DT_ID][fmt[j-1].key].key, bcf_seqname_safe(h,v), v->pos+1);
extreme_val_warned = 1;
}
tmp_val = bcf_int32_missing;
}
x[l++] = tmp_val;
t = te;
}
if (*t != ',') break;
}
if ( !l ) x[l++] = bcf_int32_missing;
for (; l < z->size>>2; ++l) x[l] = bcf_int32_vector_end;
} else if ((z->y>>4&0xf) == BCF_HT_REAL) {
float *x = (float*)(z->buf + z->size * (size_t)m);
for (l = 0;; ++t) {
if (*t == '.' && !isdigit_c(t[1])) {
bcf_float_set_missing(x[l++]), ++t; // ++t to skip "."
} else {
overflow = 0;
char *te;
float tmp_val = hts_str2dbl(t, &te, &overflow);
if ( (te==t || overflow) && !extreme_val_warned )
{
hts_log_warning("Extreme FORMAT/%s value encountered at %s:%"PRIhts_pos, h->id[BCF_DT_ID][fmt[j-1].key].key, bcf_seqname(h,v), v->pos+1);
extreme_val_warned = 1;
}
x[l++] = tmp_val;
t = te;
}
if (*t != ',') break;
}
if ( !l ) bcf_float_set_missing(x[l++]); // An empty field, insert missing value
for (; l < z->size>>2; ++l) bcf_float_set_vector_end(x[l]);
} else {
hts_log_error("Unknown FORMAT field type %d at %s:%"PRIhts_pos, z->y>>4&0xf, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_TAG_INVALID;
return -1;
}
if (*t == '\0') {
break;
}
else if (*t == ':') {
t++;
}
else {
char buffer[8];
hts_log_error("Invalid character %s in '%s' FORMAT field at %s:%"PRIhts_pos"",
hts_strprint(buffer, sizeof buffer, '\'', t, 1),
h->id[BCF_DT_ID][z->key].key, bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_CHAR;
return -1;
}
}
for (; j < v->n_fmt; ++j) { // fill end-of-vector values
fmt_aux_t *z = &fmt[j];
if ((z->y>>4&0xf) == BCF_HT_STR) {
if (z->is_gt) {
int32_t *x = (int32_t*)(z->buf + z->size * (size_t)m);
if (z->size) x[0] = bcf_int32_missing;
for (l = 1; l < z->size>>2; ++l) x[l] = bcf_int32_vector_end;
} else {
char *x = (char*)z->buf + z->size * (size_t)m;
if ( z->size ) x[0] = '.';
for (l = 1; l < z->size; ++l) x[l] = 0;
}
} else if ((z->y>>4&0xf) == BCF_HT_INT) {
int32_t *x = (int32_t*)(z->buf + z->size * (size_t)m);
x[0] = bcf_int32_missing;
for (l = 1; l < z->size>>2; ++l) x[l] = bcf_int32_vector_end;
} else if ((z->y>>4&0xf) == BCF_HT_REAL) {
float *x = (float*)(z->buf + z->size * (size_t)m);
bcf_float_set_missing(x[0]);
for (l = 1; l < z->size>>2; ++l) bcf_float_set_vector_end(x[l]);
}
}
m++; t++;
}
// write individual genotype information
kstring_t *str = &v->indiv;
int i;
if (v->n_sample > 0) {
for (i = 0; i < v->n_fmt; ++i) {
fmt_aux_t *z = &fmt[i];
bcf_enc_int1(str, z->key);
if ((z->y>>4&0xf) == BCF_HT_STR && !z->is_gt) {
bcf_enc_size(str, z->size, BCF_BT_CHAR);
kputsn((char*)z->buf, z->size * (size_t)v->n_sample, str);
} else if ((z->y>>4&0xf) == BCF_HT_INT || z->is_gt) {
bcf_enc_vint(str, (z->size>>2) * v->n_sample, (int32_t*)z->buf, z->size>>2);
} else {
bcf_enc_size(str, z->size>>2, BCF_BT_FLOAT);
if (serialize_float_array(str, (z->size>>2) * (size_t)v->n_sample,
(float *) z->buf) != 0) {
v->errcode |= BCF_ERR_LIMITS;
hts_log_error("Out of memory at %s:%"PRIhts_pos, bcf_seqname_safe(h,v), v->pos+1);
return -1;
}
}
}
}
if ( v->n_sample!=bcf_hdr_nsamples(h) )
{
hts_log_error("Number of columns at %s:%"PRIhts_pos" does not match the number of samples (%d vs %d)",
bcf_seqname_safe(h,v), v->pos+1, v->n_sample, bcf_hdr_nsamples(h));
v->errcode |= BCF_ERR_NCOLS;
return -1;
}
if ( v->indiv.l > 0xffffffff )
{
hts_log_error("The FORMAT at %s:%"PRIhts_pos" is too long", bcf_seqname_safe(h,v), v->pos+1);
v->errcode |= BCF_ERR_LIMITS;
// Error recovery: return -1 if this is a critical error or 0 if we want to ignore the FORMAT and proceed
v->n_fmt = 0;
return -1;
}
return 0;
} | 3079 | True | 1 |
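The commit message at the top of this record explains the bug precisely: the roughly 2 GiB per-row limit was applied to each FORMAT field on its own, so several fields whose combined size crossed INT_MAX could overflow fmt_aux_t::offset, and the fix is to include the memory already used in the check. A hedged sketch of that cumulative check, with illustrative names rather than HTSlib's:

#include <stdint.h>
#include <limits.h>

/* Return 1 when reserving nsample * field_size more bytes keeps the row
 * within an INT_MAX budget of which `used` bytes are already taken,
 * 0 otherwise. All arithmetic is 64-bit so the check itself cannot wrap. */
static int row_budget_ok(uint64_t used, uint64_t nsample, uint64_t field_size)
{
    if (used > (uint64_t)INT_MAX) {
        return 0;
    }
    if (field_size != 0 && nsample > UINT64_MAX / field_size) {
        return 0;                /* the product itself would overflow */
    }
    return nsample * field_size <= (uint64_t)INT_MAX - used;
}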
|
CVE-2020-36406 | False | False | False | True | AV:N/AC:M/Au:N/C:P/I:P/A:P | NETWORK | MEDIUM | NONE | PARTIAL | PARTIAL | PARTIAL | 6.8 | CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | REQUIRED | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/uwebsockets/OSV-2020-1695.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/uwebsockets/OSV-2020-1695.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/uNetworking/uWebSockets/commit/03fca626a95130ab80f86adada54b29d27242759', 'name': 'https://github.com/uNetworking/uWebSockets/commit/03fca626a95130ab80f86adada54b29d27242759', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=25381', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=25381', 'refsource': 'MISC', 'tags': ['Exploit', 'Issue Tracking', 'Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:uwebsockets_project:uwebsockets:18.11.0:*:*:*:*:node.js:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:uwebsockets_project:uwebsockets:18.12.0:*:*:*:*:node.js:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:o:linux:linux_kernel:-:*:*:*:*:*:*:*', 'cpe_name': []}]}], 'cpe_match': []}] | [{'lang': 'en', 'value': '** DISPUTED ** uWebSockets 18.11.0 and 18.12.0 has a stack-based buffer overflow in uWS::TopicTree::trimTree (called from uWS::TopicTree::unsubscribeAll). NOTE: the vendor\'s position is that this is "a minor issue or not even an issue at all" because the developer of an application (that uses uWebSockets) should not be allowing the large number of triggered topics to accumulate.'}] | 2022-04-29T01:55Z | 2021-07-01T03:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Alex Hultman | 2020-09-06 13:05:13+02:00 | Fix overflow of triggered topics | 03fca626a95130ab80f86adada54b29d27242759 | False | uNetworking/uWebSockets | Simple, secure & standards compliant web server for the most demanding of applications | 2016-03-21 04:39:40 | 2022-07-28 18:26:09 | uNetworking | 14115.0 | 1586.0 | uWS::TopicTree::publish | uWS::TopicTree::publish( Topic * iterator , size_t start , size_t stop , std :: string_view topic , std :: pair<std::string_view,std::string_view> message) | ['iterator', 'start', 'stop', 'topic', 'message'] | void publish(Topic *iterator, size_t start, size_t stop, std::string_view topic, std::pair<std::string_view, std::string_view> message) {
/* If we already have 64 triggered topics make sure to drain it here */
if (numTriggeredTopics == 64) {
drain();
}
/* Iterate over all segments in given topic */
for (; stop != std::string::npos; start = stop + 1) {
stop = topic.find('/', start);
std::string_view segment = topic.substr(start, stop - start);
/* It is very important to disallow wildcards when publishing.
* We will not catch EVERY misuse this lazy way, but enough to hinder
* explosive recursion.
* Terminating wildcards MAY still get triggered along the way, if for
* instance the error is found late while iterating the topic segments. */
if (segment.length() == 1) {
if (segment[0] == '+' || segment[0] == '#') {
return;
}
}
/* Do we have a terminating wildcard child? */
if (iterator->terminatingWildcardChild) {
iterator->terminatingWildcardChild->messages[messageId] = message;
/* Add this topic to triggered */
if (!iterator->terminatingWildcardChild->triggered) {
triggeredTopics[numTriggeredTopics++] = iterator->terminatingWildcardChild;
iterator->terminatingWildcardChild->triggered = true;
}
}
/* Do we have a wildcard child? */
if (iterator->wildcardChild) {
publish(iterator->wildcardChild, stop + 1, stop, topic, message);
}
std::map<std::string_view, Topic *>::iterator it = iterator->children.find(segment);
if (it == iterator->children.end()) {
/* Stop trying to match by exact string */
return;
}
iterator = it->second;
}
/* If we went all the way we matched exactly */
iterator->messages[messageId] = message;
/* Add this topic to triggered */
if (!iterator->triggered) {
triggeredTopics[numTriggeredTopics++] = iterator;
iterator->triggered = true;
}
} | 274 | True | 1 |
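publish() above drains only at function entry when the count is exactly 64, while a single call (and its recursion into wildcard children) can append more than one topic before that check runs again, stepping past the fixed triggeredTopics array. A hedged sketch of checking capacity at every insertion point, in plain C with illustrative names (the real class stores Topic pointers):

#include <stddef.h>

#define MAX_TRIGGERED 64

typedef struct {
    void  *items[MAX_TRIGGERED];
    size_t count;
} triggered_list_t;

/* Append one entry, draining first whenever the fixed array is already full;
 * if draining frees no slot, the append is refused rather than written out
 * of bounds. */
static int triggered_push(triggered_list_t *list, void *topic,
                          void (*drain)(triggered_list_t *))
{
    if (list->count == MAX_TRIGGERED) {
        drain(list);
        if (list->count == MAX_TRIGGERED) {
            return -1;
        }
    }
    list->items[list->count++] = topic;
    return 0;
}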
|
CVE-2020-36429 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'name': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'name': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.0.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Variant_encodeJson in open62541 1.x before 1.0.4 has an out-of-bounds write for a large recursion depth.'}] | 2021-07-28T19:28Z | 2021-07-20T07:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Julius Pfrommer | 2020-05-19 15:13:20+02:00 | fix(json): Check max recursion depth in more places | c800e2987b10bb3af6ef644b515b5d6392f8861d | False | open62541/open62541 | Open source implementation of OPC UA (OPC Unified Architecture) aka IEC 62541 licensed under Mozilla Public License v2.0 | 2013-12-20 08:45:05 | 2022-08-26 13:35:19 | http://open62541.org | open62541 | 1865.0 | 962.0 | WRITE_JSON_ELEMENT | WRITE_JSON_ELEMENT( ArrStart) | ['ArrStart'] | WRITE_JSON_ELEMENT(ArrStart) {
/* increase depth, save: before first array entry no comma needed. */
ctx->commaNeeded[++ctx->depth] = false;
return writeChar(ctx, '[');
} | 26 | True | 1 |
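The macro body above executes ctx->commaNeeded[++ctx->depth] = false; with no limit check, so a deeply nested value indexes past the end of the commaNeeded array; the commit message says the fix is to check the maximum recursion depth in more places. A hedged sketch of guarding the increment, using made-up names and limits rather than open62541's:

#include <stdbool.h>
#include <stddef.h>

#define MAX_JSON_DEPTH 64

typedef struct {
    bool   comma_needed[MAX_JSON_DEPTH + 1];
    size_t depth;
} json_ctx_t;

/* Enter one nesting level only while the bookkeeping array still has a slot
 * for it; otherwise report an encoding error to the caller. */
static int json_enter_level(json_ctx_t *ctx)
{
    if (ctx->depth >= MAX_JSON_DEPTH) {
        return -1;               /* would write past comma_needed[] */
    }
    ctx->comma_needed[++ctx->depth] = false;
    return 0;
}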
CVE-2020-36429 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'name': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'name': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.0.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Variant_encodeJson in open62541 1.x before 1.0.4 has an out-of-bounds write for a large recursion depth.'}] | 2021-07-28T19:28Z | 2021-07-20T07:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Julius Pfrommer | 2020-05-19 15:13:20+02:00 | fix(json): Check max recursion depth in more places | c800e2987b10bb3af6ef644b515b5d6392f8861d | False | open62541/open62541 | Open source implementation of OPC UA (OPC Unified Architecture) aka IEC 62541 licensed under Mozilla Public License v2.0 | 2013-12-20 08:45:05 | 2022-08-26 13:35:19 | http://open62541.org | open62541 | 1865.0 | 962.0 | addMultiArrayContentJSON | addMultiArrayContentJSON( CtxJson * ctx , void * array , const UA_DataType * type , size_t * index , UA_UInt32 * arrayDimensions , size_t dimensionIndex , size_t dimensionSize) | ['ctx', 'array', 'type', 'index', 'arrayDimensions', 'dimensionIndex', 'dimensionSize'] | addMultiArrayContentJSON(CtxJson *ctx, void* array, const UA_DataType *type,
size_t *index, UA_UInt32 *arrayDimensions, size_t dimensionIndex,
size_t dimensionSize) {
/* Check the recursion limit */
if(ctx->depth > UA_JSON_ENCODING_MAX_RECURSION)
return UA_STATUSCODE_BADENCODINGERROR;
/* Stop recursion: The inner Arrays are written */
status ret;
if(dimensionIndex == (dimensionSize - 1)) {
ret = encodeJsonArray(ctx, ((u8*)array) + (type->memSize * *index),
arrayDimensions[dimensionIndex], type);
(*index) += arrayDimensions[dimensionIndex];
return ret;
}
/* Recurse to the next dimension */
ret = writeJsonArrStart(ctx);
for(size_t i = 0; i < arrayDimensions[dimensionIndex]; i++) {
ret |= writeJsonCommaIfNeeded(ctx);
ret |= addMultiArrayContentJSON(ctx, array, type, index, arrayDimensions,
dimensionIndex + 1, dimensionSize);
ctx->commaNeeded[ctx->depth] = true;
if(ret != UA_STATUSCODE_GOOD)
return ret;
}
ret |= writeJsonArrEnd(ctx);
return ret;
} | 185 | True | 1 |
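addMultiArrayContentJSON above already rejects calls beyond UA_JSON_ENCODING_MAX_RECURSION, but it still trusts the caller-supplied arrayDimensions while walking the dimensions. As a complementary precaution, a writer can validate the dimension count and the total element count up front, before any recursion; a small sketch with illustrative names and caps:

#include <stdint.h>
#include <stddef.h>

#define MAX_ARRAY_DIMENSIONS 32
#define MAX_TOTAL_ELEMENTS   (1u << 24)

/* Multiply the dimensions with overflow checking and enforce caps on both
 * the dimension count and the element count; returns the element count,
 * or 0 when the shape must be rejected. */
static uint64_t checked_element_count(const uint32_t *dims, size_t ndims)
{
    if (ndims == 0 || ndims > MAX_ARRAY_DIMENSIONS) {
        return 0;
    }
    uint64_t total = 1;
    for (size_t i = 0; i < ndims; i++) {
        if (dims[i] == 0 || total > MAX_TOTAL_ELEMENTS / dims[i]) {
            return 0;            /* empty dimension or product too large */
        }
        total *= dims[i];
    }
    return total;
}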
CVE-2020-36429 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'name': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'name': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.0.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Variant_encodeJson in open62541 1.x before 1.0.4 has an out-of-bounds write for a large recursion depth.'}] | 2021-07-28T19:28Z | 2021-07-20T07:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Julius Pfrommer | 2020-05-19 15:13:20+02:00 | fix(json): Check max recursion depth in more places | c800e2987b10bb3af6ef644b515b5d6392f8861d | False | open62541/open62541 | Open source implementation of OPC UA (OPC Unified Architecture) aka IEC 62541 licensed under Mozilla Public License v2.0 | 2013-12-20 08:45:05 | 2022-08-26 13:35:19 | http://open62541.org | open62541 | 1865.0 | 962.0 | decodeJsonStructure | decodeJsonStructure( void * dst , const UA_DataType * type , CtxJson * ctx , ParseCtx * parseCtx , UA_Boolean moveToken) | ['dst', 'type', 'ctx', 'parseCtx', 'moveToken'] | decodeJsonStructure(void *dst, const UA_DataType *type, CtxJson *ctx,
ParseCtx *parseCtx, UA_Boolean moveToken) {
(void) moveToken;
/* Check the recursion limit */
if(ctx->depth > UA_JSON_ENCODING_MAX_RECURSION)
return UA_STATUSCODE_BADENCODINGERROR;
ctx->depth++;
uintptr_t ptr = (uintptr_t)dst;
status ret = UA_STATUSCODE_GOOD;
u8 membersSize = type->membersSize;
const UA_DataType *typelists[2] = { UA_TYPES, &type[-type->typeIndex] };
UA_STACKARRAY(DecodeEntry, entries, membersSize);
for(size_t i = 0; i < membersSize && ret == UA_STATUSCODE_GOOD; ++i) {
const UA_DataTypeMember *m = &type->members[i];
const UA_DataType *mt = &typelists[!m->namespaceZero][m->memberTypeIndex];
entries[i].type = mt;
if(!m->isArray) {
ptr += m->padding;
entries[i].fieldName = m->memberName;
entries[i].fieldPointer = (void*)ptr;
entries[i].function = decodeJsonJumpTable[mt->typeKind];
entries[i].found = false;
ptr += mt->memSize;
} else {
ptr += m->padding;
ptr += sizeof(size_t);
entries[i].fieldName = m->memberName;
entries[i].fieldPointer = (void*)ptr;
entries[i].function = (decodeJsonSignature)Array_decodeJson;
entries[i].found = false;
ptr += sizeof(void*);
}
}
ret = decodeFields(ctx, parseCtx, entries, membersSize, type);
ctx->depth--;
return ret;
} | 316 | True | 1 |
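Editor's note: the decodeJsonStructure function in the record above builds a per-member table of {field name, destination pointer, decode function} entries (DecodeEntry) and hands it to a generic decodeFields driver. The following is a stripped-down sketch of that table-driven decoding idea in C, with toy field types instead of the UA_DataTypeMember metadata; all names here are invented for illustration.

#include <stddef.h>
#include <string.h>
#include <stdlib.h>

typedef int (*decode_fn)(const char *text, void *out);

typedef struct {
    const char *field_name;   /* JSON key to match */
    void       *field_ptr;    /* where the decoded value is stored */
    decode_fn   decode;       /* how to decode it */
    int         found;        /* set once the key has been seen */
} decode_entry;

static int decode_int(const char *text, void *out) {
    *(int *)out = (int)strtol(text, NULL, 10);
    return 0;
}

typedef struct { int x; int y; } point;

/* Driver: looks up the key in the entry table and dispatches its decoder. */
static int decode_field(decode_entry *entries, size_t n,
                        const char *key, const char *value) {
    for (size_t i = 0; i < n; i++) {
        if (strcmp(entries[i].field_name, key) == 0) {
            entries[i].found = 1;
            return entries[i].decode(value, entries[i].field_ptr);
        }
    }
    return -1;   /* unknown key */
}

int main(void) {
    point p = {0, 0};
    decode_entry entries[] = {
        { "x", &p.x, decode_int, 0 },
        { "y", &p.y, decode_int, 0 },
    };
    decode_field(entries, 2, "x", "42");   /* p.x becomes 42 */
    return p.x == 42 ? 0 : 1;
}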
CVE-2020-36429 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:N/A:P | LOCAL | LOW | NONE | NONE | NONE | PARTIAL | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | NONE | HIGH | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'name': 'https://github.com/open62541/open62541/commit/c800e2987b10bb3af6ef644b515b5d6392f8861d', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'name': 'https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=20578', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'name': 'https://github.com/google/oss-fuzz-vulns/blob/main/vulns/open62541/OSV-2020-153.yaml', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'name': 'https://github.com/open62541/open62541/compare/v1.0.3...v1.0.4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:*:*:*:*:*:*:*:*', 'versionStartIncluding': '1.0.0', 'versionEndExcluding': '1.0.4', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'Variant_encodeJson in open62541 1.x before 1.0.4 has an out-of-bounds write for a large recursion depth.'}] | 2021-07-28T19:28Z | 2021-07-20T07:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Julius Pfrommer | 2020-05-19 15:13:20+02:00 | fix(json): Check max recursion depth in more places | c800e2987b10bb3af6ef644b515b5d6392f8861d | False | open62541/open62541 | Open source implementation of OPC UA (OPC Unified Architecture) aka IEC 62541 licensed under Mozilla Public License v2.0 | 2013-12-20 08:45:05 | 2022-08-26 13:35:19 | http://open62541.org | open62541 | 1865.0 | 962.0 | encodeJsonStructure | encodeJsonStructure( const void * src , const UA_DataType * type , CtxJson * ctx) | ['src', 'type', 'ctx'] | encodeJsonStructure(const void *src, const UA_DataType *type, CtxJson *ctx) {
/* Check the recursion limit */
if(ctx->depth > UA_JSON_ENCODING_MAX_RECURSION)
return UA_STATUSCODE_BADENCODINGERROR;
ctx->depth++;
status ret = writeJsonObjStart(ctx);
uintptr_t ptr = (uintptr_t) src;
u8 membersSize = type->membersSize;
const UA_DataType * typelists[2] = {UA_TYPES, &type[-type->typeIndex]};
for(size_t i = 0; i < membersSize && ret == UA_STATUSCODE_GOOD; ++i) {
const UA_DataTypeMember *m = &type->members[i];
const UA_DataType *mt = &typelists[!m->namespaceZero][m->memberTypeIndex];
if(m->memberName != NULL && *m->memberName != 0)
ret |= writeJsonKey(ctx, m->memberName);
if(!m->isArray) {
ptr += m->padding;
size_t memSize = mt->memSize;
ret |= encodeJsonJumpTable[mt->typeKind]((const void*) ptr, mt, ctx);
ptr += memSize;
} else {
ptr += m->padding;
const size_t length = *((const size_t*) ptr);
ptr += sizeof (size_t);
ret |= encodeJsonArray(ctx, *(void * const *)ptr, length, mt);
ptr += sizeof (void*);
}
}
ret |= writeJsonObjEnd(ctx);
ctx->depth--;
return ret;
} | 276 | True | 1 |
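Editor's note: both structure codecs above dispatch per member through a table indexed by the member's type kind (decodeJsonJumpTable[mt->typeKind], encodeJsonJumpTable[mt->typeKind]). A compact sketch of that jump-table dispatch pattern follows, using made-up type kinds and handlers rather than the real UA_DataType machinery.

#include <stddef.h>
#include <stdio.h>

typedef enum { KIND_BOOL, KIND_INT32, KIND_STRING, KIND_COUNT } type_kind;

typedef int (*encode_fn)(const void *value);   /* one handler per type kind */

static int encode_bool(const void *v)   { return printf("%s", *(const int *)v ? "true" : "false") < 0; }
static int encode_int32(const void *v)  { return printf("%d", *(const int *)v) < 0; }
static int encode_string(const void *v) { return printf("\"%s\"", (const char *)v) < 0; }

/* Table indexed by type kind; the codec picks the handler in O(1). */
static const encode_fn encode_jump_table[KIND_COUNT] = {
    [KIND_BOOL]   = encode_bool,
    [KIND_INT32]  = encode_int32,
    [KIND_STRING] = encode_string,
};

int main(void) {
    int flag = 1;
    return encode_jump_table[KIND_BOOL](&flag);   /* prints "true" */
}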
CVE-2022-25761 | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | nan | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | NONE | HIGH | 7.5 | HIGH | 3.9 | 3.6 | nan | [{'url': 'https://github.com/open62541/open62541/pull/5173', 'name': 'N/A', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://security.snyk.io/vuln/SNYK-UNMANAGED-OPEN62541OPEN62541-2988719', 'name': 'N/A', 'refsource': 'CONFIRM', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/releases/tag/v1.2.5', 'name': 'N/A', 'refsource': 'CONFIRM', 'tags': ['Release Notes', 'Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/commit/b79db1ac78146fc06b0b8435773d3967de2d659c', 'name': 'N/A', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/open62541/open62541/releases/tag/v1.3.1', 'name': 'N/A', 'refsource': 'CONFIRM', 'tags': ['Release Notes', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-770'}]}] | nan | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:1.3:rc2:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:1.3:rc2-ef:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:1.3:rc2-ef2:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:1.3:rc1:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:open62541:open62541:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.5', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'The package open62541/open62541 before 1.2.5, from 1.3-rc1 and before 1.3.1 are vulnerable to Denial of Service (DoS) due to a missing limitation on the number of received chunks - per single session or in total for all concurrent sessions. An attacker can exploit this vulnerability by sending an unlimited number of huge chunks (e.g. 2GB each) without sending the Final closing chunk.'}] | 2022-08-25T20:34Z | 2022-08-23T05:15Z | Allocation of Resources Without Limits or Throttling | The software allocates a reusable resource or group of resources on behalf of an actor without imposing any restrictions on the size or number of resources that can be allocated, in violation of the intended security policy for that actor. | Code frequently has to work with limited resources, so programmers must be careful to ensure that resources are not consumed too quickly, or too easily. Without use of quotas, resource limits, or other protection mechanisms, it can be easy for an attacker to consume many resources by rapidly making many requests, or causing larger resources to be used than is needed. When too many resources are allocated, or if a single resource is too large, then it can prevent the code from working correctly, possibly leading to a denial of service.
| https://cwe.mitre.org/data/definitions/770.html | 0 | Julius Pfrommer | 2022-06-04 12:32:41+02:00 | fix(plugin): Add default limits for chunks and message size
Based on a reported DoS vulnerability reported by Team82 (Claroty
Research). | b79db1ac78146fc06b0b8435773d3967de2d659c | False | open62541/open62541 | Open source implementation of OPC UA (OPC Unified Architecture) aka IEC 62541 licensed under Mozilla Public License v2.0 | 2013-12-20 08:45:05 | 2022-08-26 13:35:19 | http://open62541.org | open62541 | 1865.0 | 962.0 | setup_secureChannel | setup_secureChannel( void) | ['void'] | setup_secureChannel(void) {
TestingPolicy(&dummyPolicy, dummyCertificate, &fCalled, &keySizes);
UA_SecureChannel_init(&testChannel, &UA_ConnectionConfig_default);
UA_SecureChannel_setSecurityPolicy(&testChannel, &dummyPolicy, &dummyCertificate);
testingConnection = createDummyConnection(65535, &sentData);
UA_Connection_attachSecureChannel(&testingConnection, &testChannel);
testChannel.connection = &testingConnection;
testChannel.state = UA_SECURECHANNELSTATE_OPEN;
} | 73 | True | 1 |
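Editor's note on the CVE-2022-25761 record above: the weakness (CWE-770) is accepting an unbounded number of huge chunks before the final closing chunk arrives, and the fix commit adds default limits for chunk count and total message size. A hedged sketch of that kind of guard in C follows; the limits and the reassembly_state structure are invented for illustration and are not the open62541 connection configuration.

#include <stddef.h>

#define MAX_CHUNK_COUNT   64                   /* illustrative per-message chunk limit */
#define MAX_MESSAGE_BYTES (4u * 1024 * 1024)   /* illustrative total reassembly limit */

typedef struct {
    size_t chunk_count;     /* chunks buffered for the message being reassembled */
    size_t buffered_bytes;  /* total bytes buffered so far */
} reassembly_state;

/* Returns 0 if the chunk may be buffered, -1 if it must be rejected
 * (typically by closing the offending connection). */
static int accept_chunk(reassembly_state *st, size_t chunk_len) {
    if (st->chunk_count + 1 > MAX_CHUNK_COUNT)
        return -1;                              /* too many partial chunks outstanding */
    if (chunk_len > MAX_MESSAGE_BYTES - st->buffered_bytes)
        return -1;                              /* overflow-safe check on the total size */
    st->chunk_count++;
    st->buffered_bytes += chunk_len;
    return 0;
}

The point of the check is that memory is bounded before allocation happens, so a peer that withholds the final chunk can no longer grow the reassembly buffers without limit.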
CVE-2020-5208 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp', 'name': 'https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/ipmitool/ipmitool/commit/e824c23316ae50beb7f7488f2055ac65e8b341f2', 'name': 'https://github.com/ipmitool/ipmitool/commit/e824c23316ae50beb7f7488f2055ac65e8b341f2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2020/02/msg00006.html', 'name': '[debian-lts-announce] 20200209 [SECURITY] [DLA 2098-1] ipmitool security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/RYYEKUAUTCWICM77HOEGZDVVEUJLP4BP/', 'name': 'FEDORA-2020-92cc67ff5a', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/K2BPW66KDP4H36AGZXLED57A3O2Y6EQW/', 'name': 'FEDORA-2020-eb0cf4d268', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-02/msg00031.html', 'name': 'openSUSE-SU-2020:0247', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202101-03', 'name': 'GLSA-202101-03', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2021/06/msg00029.html', 'name': '[debian-lts-announce] 20210630 [SECURITY] [DLA 2699-1] ipmitool security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:ipmitool_project:ipmitool:1.8.18:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "It's been found that multiple functions in ipmitool before 1.8.19 neglect proper checking of the data received from a remote LAN party, which may lead to buffer overflows and potentially to remote code execution on the ipmitool side. This is especially dangerous if ipmitool is run as a privileged user. 
This problem is fixed in version 1.8.19."}] | 2021-12-30T21:13Z | 2020-02-05T14:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Chrostoper Ertl | 2019-11-28 16:33:59+00:00 | fru: Fix buffer overflow vulnerabilities
Partial fix for CVE-2020-5208, see
https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp
The `read_fru_area_section` function only performs size validation of
requested read size, and falsely assumes that the IPMI message will not
respond with more than the requested amount of data; it uses the
unvalidated response size to copy into `frubuf`. If the response is
larger than the request, this can result in overflowing the buffer.
The same issue affects the `read_fru_area` function. | e824c23316ae50beb7f7488f2055ac65e8b341f2 | False | ipmitool/ipmitool | An open-source tool for controlling IPMI-enabled systems | 2018-04-08 22:18:30 | 2022-08-15 10:38:40 | ipmitool | 734.0 | 246.0 | read_fru_area | read_fru_area( struct ipmi_intf * intf , struct fru_info * fru , uint8_t id , uint32_t offset , uint32_t length , uint8_t * frubuf) | ['intf', 'fru', 'id', 'offset', 'length', 'frubuf'] | read_fru_area(struct ipmi_intf * intf, struct fru_info *fru, uint8_t id,
uint32_t offset, uint32_t length, uint8_t *frubuf)
{
uint32_t off = offset, tmp, finish;
struct ipmi_rs * rsp;
struct ipmi_rq req;
uint8_t msg_data[4];
if (offset > fru->size) {
lprintf(LOG_ERR, "Read FRU Area offset incorrect: %d > %d",
offset, fru->size);
return -1;
}
finish = offset + length;
if (finish > fru->size) {
finish = fru->size;
lprintf(LOG_NOTICE, "Read FRU Area length %d too large, "
"Adjusting to %d",
offset + length, finish - offset);
}
memset(&req, 0, sizeof(req));
req.msg.netfn = IPMI_NETFN_STORAGE;
req.msg.cmd = GET_FRU_DATA;
req.msg.data = msg_data;
req.msg.data_len = 4;
if (fru->max_read_size == 0) {
uint16_t max_rs_size = ipmi_intf_get_max_response_data_size(intf) - 1;
/* validate lower bound of the maximum response data size */
if (max_rs_size <= 1) {
lprintf(LOG_ERROR, "Maximum response size is too small to send "
"a read request");
return -1;
}
/*
* Read FRU Info command may read up to 255 bytes of data.
*/
if (max_rs_size - 1 > 255) {
/* Limit the max read size with 255 bytes. */
fru->max_read_size = 255;
} else {
/* subtract 1 byte for bytes count */
fru->max_read_size = max_rs_size - 1;
}
/* check word access */
if (fru->access) {
fru->max_read_size &= ~1;
}
}
do {
tmp = fru->access ? off >> 1 : off;
msg_data[0] = id;
msg_data[1] = (uint8_t)(tmp & 0xff);
msg_data[2] = (uint8_t)(tmp >> 8);
tmp = finish - off;
if (tmp > fru->max_read_size)
msg_data[3] = (uint8_t)fru->max_read_size;
else
msg_data[3] = (uint8_t)tmp;
rsp = intf->sendrecv(intf, &req);
if (!rsp) {
lprintf(LOG_NOTICE, "FRU Read failed");
break;
}
if (rsp->ccode) {
/* if we get C7h or C8h or CAh return code then we requested too
* many bytes at once so try again with smaller size */
if (fru_cc_rq2big(rsp->ccode)
&& fru->max_read_size > FRU_BLOCK_SZ)
{
if (fru->max_read_size > FRU_AREA_MAXIMUM_BLOCK_SZ) {
/* subtract read length more aggressively */
fru->max_read_size -= FRU_BLOCK_SZ;
} else {
/* subtract length less aggressively */
fru->max_read_size--;
}
lprintf(LOG_INFO, "Retrying FRU read with request size %d",
fru->max_read_size);
continue;
}
lprintf(LOG_NOTICE, "FRU Read failed: %s",
val2str(rsp->ccode, completion_code_vals));
break;
}
tmp = fru->access ? rsp->data[0] << 1 : rsp->data[0];
memcpy(frubuf, rsp->data + 1, tmp);
off += tmp;
frubuf += tmp;
/* sometimes the size returned in the Info command
* is too large. return 0 so higher level function
* still attempts to parse what was returned */
if (tmp == 0 && off < finish) {
return 0;
}
} while (off < finish);
if (off < finish) {
return -1;
}
return 0;
} | 520 | True | 1 |
|
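Editor's note: the commit message in the CVE-2020-5208 rows states the flaw pattern precisely — the response length byte comes from the remote end and is copied into frubuf without being validated against the bytes that were actually requested or that remain in the destination. A minimal, hedged sketch of the defensive version of that copy in C; the fru_read_response type and copy_fru_chunk helper are invented for illustration and are not ipmitool's API.

#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t len;          /* length claimed by the remote BMC (attacker-controlled) */
    uint8_t data[255];    /* payload that follows the length byte */
} fru_read_response;

/* Copies at most `remaining` bytes into dst and returns how many were taken;
 * returns -1 if the peer claims more data than was requested or than fits. */
static int copy_fru_chunk(uint8_t *dst, size_t remaining, size_t requested,
                          const fru_read_response *rsp) {
    size_t n = rsp->len;
    if (n > requested || n > remaining)
        return -1;                  /* never trust the response to honour the request size */
    memcpy(dst, rsp->data, n);      /* bounded copy that cannot run past dst */
    return (int)n;
}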
CVE-2020-5208 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:P/A:P | NETWORK | LOW | SINGLE | PARTIAL | PARTIAL | PARTIAL | 6.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 8.8 | HIGH | 2.8 | 5.9 | False | [{'url': 'https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp', 'name': 'https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/ipmitool/ipmitool/commit/e824c23316ae50beb7f7488f2055ac65e8b341f2', 'name': 'https://github.com/ipmitool/ipmitool/commit/e824c23316ae50beb7f7488f2055ac65e8b341f2', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2020/02/msg00006.html', 'name': '[debian-lts-announce] 20200209 [SECURITY] [DLA 2098-1] ipmitool security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/RYYEKUAUTCWICM77HOEGZDVVEUJLP4BP/', 'name': 'FEDORA-2020-92cc67ff5a', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/K2BPW66KDP4H36AGZXLED57A3O2Y6EQW/', 'name': 'FEDORA-2020-eb0cf4d268', 'refsource': 'FEDORA', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-02/msg00031.html', 'name': 'openSUSE-SU-2020:0247', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://security.gentoo.org/glsa/202101-03', 'name': 'GLSA-202101-03', 'refsource': 'GENTOO', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.debian.org/debian-lts-announce/2021/06/msg00029.html', 'name': '[debian-lts-announce] 20210630 [SECURITY] [DLA 2699-1] ipmitool security update', 'refsource': 'MLIST', 'tags': ['Mailing List', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:ipmitool_project:ipmitool:1.8.18:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "It's been found that multiple functions in ipmitool before 1.8.19 neglect proper checking of the data received from a remote LAN party, which may lead to buffer overflows and potentially to remote code execution on the ipmitool side. This is especially dangerous if ipmitool is run as a privileged user. 
This problem is fixed in version 1.8.19."}] | 2021-12-30T21:13Z | 2020-02-05T14:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Chrostoper Ertl | 2019-11-28 16:33:59+00:00 | fru: Fix buffer overflow vulnerabilities
Partial fix for CVE-2020-5208, see
https://github.com/ipmitool/ipmitool/security/advisories/GHSA-g659-9qxw-p7cp
The `read_fru_area_section` function only performs size validation of
requested read size, and falsely assumes that the IPMI message will not
respond with more than the requested amount of data; it uses the
unvalidated response size to copy into `frubuf`. If the response is
larger than the request, this can result in overflowing the buffer.
The same issue affects the `read_fru_area` function. | e824c23316ae50beb7f7488f2055ac65e8b341f2 | False | ipmitool/ipmitool | An open-source tool for controlling IPMI-enabled systems | 2018-04-08 22:18:30 | 2022-08-15 10:38:40 | ipmitool | 734.0 | 246.0 | read_fru_area_section | read_fru_area_section( struct ipmi_intf * intf , struct fru_info * fru , uint8_t id , uint32_t offset , uint32_t length , uint8_t * frubuf) | ['intf', 'fru', 'id', 'offset', 'length', 'frubuf'] | read_fru_area_section(struct ipmi_intf * intf, struct fru_info *fru, uint8_t id,
uint32_t offset, uint32_t length, uint8_t *frubuf)
{
static uint32_t fru_data_rqst_size = 20;
uint32_t off = offset, tmp, finish;
struct ipmi_rs * rsp;
struct ipmi_rq req;
uint8_t msg_data[4];
if (offset > fru->size) {
lprintf(LOG_ERR, "Read FRU Area offset incorrect: %d > %d",
offset, fru->size);
return -1;
}
finish = offset + length;
if (finish > fru->size) {
finish = fru->size;
lprintf(LOG_NOTICE, "Read FRU Area length %d too large, "
"Adjusting to %d",
offset + length, finish - offset);
}
memset(&req, 0, sizeof(req));
req.msg.netfn = IPMI_NETFN_STORAGE;
req.msg.cmd = GET_FRU_DATA;
req.msg.data = msg_data;
req.msg.data_len = 4;
#ifdef LIMIT_ALL_REQUEST_SIZE
if (fru_data_rqst_size > 16)
#else
if (fru->access && fru_data_rqst_size > 16)
#endif
fru_data_rqst_size = 16;
do {
tmp = fru->access ? off >> 1 : off;
msg_data[0] = id;
msg_data[1] = (uint8_t)(tmp & 0xff);
msg_data[2] = (uint8_t)(tmp >> 8);
tmp = finish - off;
if (tmp > fru_data_rqst_size)
msg_data[3] = (uint8_t)fru_data_rqst_size;
else
msg_data[3] = (uint8_t)tmp;
rsp = intf->sendrecv(intf, &req);
if (!rsp) {
lprintf(LOG_NOTICE, "FRU Read failed");
break;
}
if (rsp->ccode) {
/* if we get C7 or C8 or CA return code then we requested too
* many bytes at once so try again with smaller size */
if (fru_cc_rq2big(rsp->ccode) && (--fru_data_rqst_size > FRU_BLOCK_SZ)) {
lprintf(LOG_INFO,
"Retrying FRU read with request size %d",
fru_data_rqst_size);
continue;
}
lprintf(LOG_NOTICE, "FRU Read failed: %s",
val2str(rsp->ccode, completion_code_vals));
break;
}
tmp = fru->access ? rsp->data[0] << 1 : rsp->data[0];
memcpy((frubuf + off)-offset, rsp->data + 1, tmp);
off += tmp;
/* sometimes the size returned in the Info command
* is too large. return 0 so higher level function
* still attempts to parse what was returned */
if (tmp == 0 && off < finish)
return 0;
} while (off < finish);
if (off < finish)
return -1;
return 0;
} | 434 | True | 1 |
|
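Editor's note: both ipmitool functions above also show a second recurring technique — when the BMC answers "request too big" (completion codes C7h/C8h/CAh), the reader shrinks its request size and retries rather than failing outright. A small sketch of that back-off loop follows; send_request and the completion-code constants are hypothetical, and this sketch simply halves the request, whereas ipmitool's own code steps the size down more gradually.

#include <stdint.h>

#define CC_OK          0x00
#define CC_REQ_TOO_BIG 0xC7   /* one of the "shrink and retry" completion codes */
#define MIN_READ_SIZE  8

/* Hypothetical transport call: fills *cc with the device's completion code. */
extern int send_request(uint16_t offset, uint16_t count, uint8_t *cc);

/* Tries to read `count` bytes, shrinking the request while the device
 * reports it as too large. Returns the size that finally succeeded, or 0. */
static uint16_t read_with_backoff(uint16_t offset, uint16_t count) {
    while (count >= MIN_READ_SIZE) {
        uint8_t cc = 0;
        if (send_request(offset, count, &cc) == 0 && cc == CC_OK)
            return count;          /* this request size works */
        if (cc != CC_REQ_TOO_BIG)
            return 0;              /* a real error, not a size problem */
        count /= 2;                /* back off and retry with a smaller read */
    }
    return 0;
}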
CVE-2020-6016 | False | False | False | False | AV:N/AC:L/Au:N/C:C/I:C/A:C | NETWORK | LOW | NONE | COMPLETE | COMPLETE | COMPLETE | 10.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles unreliable segments with negative offsets in function SNP_ReceiveUnreliableSegment(), leading to a Heap-Based Buffer Underflow and a free() of memory not from the heap, resulting in a memory corruption and probably even a remote code execution."}] | 2020-12-10T23:15Z | 2020-11-18T15:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 11:01:54-07:00 | Drop unreliable segments with weird offset/size.
And be more deliberate about limits of unreliable message/segment sizes. | e0c86dcb9139771db3db0cfdb1fb8bef0af19c43 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::ProcessPlainTextDataChunk | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::ProcessPlainTextDataChunk( int usecTimeSinceLast , RecvPacketContext_t & ctx) | ['usecTimeSinceLast', 'ctx'] | bool CSteamNetworkConnectionBase::ProcessPlainTextDataChunk( int usecTimeSinceLast, RecvPacketContext_t &ctx )
{
#define DECODE_ERROR( ... ) do { \
ConnectionState_ProblemDetectedLocally( k_ESteamNetConnectionEnd_Misc_InternalError, __VA_ARGS__ ); \
return false; } while(false)
#define EXPECT_BYTES(n,pszWhatFor) \
do { \
if ( pDecode + (n) > pEnd ) \
DECODE_ERROR( "SNP decode overrun, %d bytes for %s", (n), pszWhatFor ); \
} while (false)
#define READ_8BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(1,pszWhatFor); var = *(uint8 *)pDecode; pDecode += 1; } while(false)
#define READ_16BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(2,pszWhatFor); var = LittleWord(*(uint16 *)pDecode); pDecode += 2; } while(false)
#define READ_24BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(3,pszWhatFor); \
var = *(uint8 *)pDecode; pDecode += 1; \
var |= uint32( LittleWord(*(uint16 *)pDecode) ) << 8U; pDecode += 2; \
} while(false)
#define READ_32BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(4,pszWhatFor); var = LittleDWord(*(uint32 *)pDecode); pDecode += 4; } while(false)
#define READ_48BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(6,pszWhatFor); \
var = LittleWord( *(uint16 *)pDecode ); pDecode += 2; \
var |= uint64( LittleDWord(*(uint32 *)pDecode) ) << 16U; pDecode += 4; \
} while(false)
#define READ_64BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(8,pszWhatFor); var = LittleQWord(*(uint64 *)pDecode); pDecode += 8; } while(false)
#define READ_VARINT( var, pszWhatFor ) \
do { pDecode = DeserializeVarInt( pDecode, pEnd, var ); if ( !pDecode ) { DECODE_ERROR( "SNP data chunk decode overflow, varint for %s", pszWhatFor ); } } while(false)
#define READ_SEGMENT_DATA_SIZE( is_reliable ) \
int cbSegmentSize; \
{ \
int sizeFlags = nFrameType & 7; \
if ( sizeFlags <= 4 ) \
{ \
uint8 lowerSizeBits; \
READ_8BITU( lowerSizeBits, #is_reliable " size lower bits" ); \
cbSegmentSize = (sizeFlags<<8) + lowerSizeBits; \
if ( pDecode + cbSegmentSize > pEnd ) \
{ \
DECODE_ERROR( "SNP decode overrun %d bytes for %s segment data.", cbSegmentSize, #is_reliable ); \
} \
} \
else if ( sizeFlags == 7 ) \
{ \
cbSegmentSize = pEnd - pDecode; \
} \
else \
{ \
DECODE_ERROR( "Invalid SNP frame lead byte 0x%02x. (size bits)", nFrameType ); \
} \
} \
const uint8 *pSegmentData = pDecode; \
pDecode += cbSegmentSize;
// Make sure we have initialized the connection
Assert( BStateIsActive() );
const SteamNetworkingMicroseconds usecNow = ctx.m_usecNow;
const int64 nPktNum = ctx.m_nPktNum;
bool bInhibitMarkReceived = false;
const int nLogLevelPacketDecode = m_connectionConfig.m_LogLevel_PacketDecode.Get();
SpewVerboseGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld\n", GetDescription(), (long long)nPktNum );
// Decode frames until we get to the end of the payload
const byte *pDecode = (const byte *)ctx.m_pPlainText;
const byte *pEnd = pDecode + ctx.m_cbPlainText;
int64 nCurMsgNum = 0;
int64 nDecodeReliablePos = 0;
while ( pDecode < pEnd )
{
uint8 nFrameType = *pDecode;
++pDecode;
if ( ( nFrameType & 0xc0 ) == 0x00 )
{
//
// Unreliable segment
//
// Decode message number
if ( nCurMsgNum == 0 )
{
// First unreliable frame. Message number is absolute, but only bottom N bits are sent
static const char szUnreliableMsgNumOffset[] = "unreliable msgnum";
int64 nLowerBits, nMask;
if ( nFrameType & 0x10 )
{
READ_32BITU( nLowerBits, szUnreliableMsgNumOffset );
nMask = 0xffffffff;
nCurMsgNum = NearestWithSameLowerBits( (int32)nLowerBits, m_receiverState.m_nHighestSeenMsgNum );
}
else
{
READ_16BITU( nLowerBits, szUnreliableMsgNumOffset );
nMask = 0xffff;
nCurMsgNum = NearestWithSameLowerBits( (int16)nLowerBits, m_receiverState.m_nHighestSeenMsgNum );
}
Assert( ( nCurMsgNum & nMask ) == nLowerBits );
if ( nCurMsgNum <= 0 )
{
DECODE_ERROR( "SNP decode unreliable msgnum underflow. %llx mod %llx, highest seen %llx",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_receiverState.m_nHighestSeenMsgNum );
}
if ( std::abs( nCurMsgNum - m_receiverState.m_nHighestSeenMsgNum ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent abs unreliable message number using %llx mod %llx, highest seen %llx\n",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_receiverState.m_nHighestSeenMsgNum );
}
}
else
{
if ( nFrameType & 0x10 )
{
uint64 nMsgNumOffset;
READ_VARINT( nMsgNumOffset, "unreliable msgnum offset" );
nCurMsgNum += nMsgNumOffset;
}
else
{
++nCurMsgNum;
}
}
if ( nCurMsgNum > m_receiverState.m_nHighestSeenMsgNum )
m_receiverState.m_nHighestSeenMsgNum = nCurMsgNum;
//
// Decode segment offset in message
//
uint32 nOffset = 0;
if ( nFrameType & 0x08 )
READ_VARINT( nOffset, "unreliable data offset" );
//
// Decode size, locate segment data
//
READ_SEGMENT_DATA_SIZE( unreliable )
Assert( cbSegmentSize > 0 ); // !TEST! Bogus assert, zero byte messages are OK. Remove after testing
// Receive the segment
bool bLastSegmentInMessage = ( nFrameType & 0x20 ) != 0;
SNP_ReceiveUnreliableSegment( nCurMsgNum, nOffset, pSegmentData, cbSegmentSize, bLastSegmentInMessage, usecNow );
}
else if ( ( nFrameType & 0xe0 ) == 0x40 )
{
//
// Reliable segment
//
// First reliable segment?
if ( nDecodeReliablePos == 0 )
{
// Stream position is absolute. How many bits?
static const char szFirstReliableStreamPos[] = "first reliable streampos";
int64 nOffset, nMask;
switch ( nFrameType & (3<<3) )
{
case 0<<3: READ_24BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<24)-1; break;
case 1<<3: READ_32BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<32)-1; break;
case 2<<3: READ_48BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<48)-1; break;
default: DECODE_ERROR( "Reserved reliable stream pos size" );
}
// What do we expect to receive next?
int64 nExpectNextStreamPos = m_receiverState.m_nReliableStreamPos + len( m_receiverState.m_bufReliableStream );
// Find the stream offset closest to that
nDecodeReliablePos = ( nExpectNextStreamPos & ~nMask ) + nOffset;
if ( nDecodeReliablePos + (nMask>>1) < nExpectNextStreamPos )
{
nDecodeReliablePos += nMask+1;
Assert( ( nDecodeReliablePos & nMask ) == nOffset );
Assert( nExpectNextStreamPos < nDecodeReliablePos );
Assert( nExpectNextStreamPos + (nMask>>1) >= nDecodeReliablePos );
}
if ( nDecodeReliablePos <= 0 )
{
DECODE_ERROR( "SNP decode first reliable stream pos underflow. %llx mod %llx, expected next %llx",
(unsigned long long)nOffset, (unsigned long long)( nMask+1 ), (unsigned long long)nExpectNextStreamPos );
}
if ( std::abs( nDecodeReliablePos - nExpectNextStreamPos ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent reliable stream pos using %llx mod %llx, expected next %llx\n",
(unsigned long long)nOffset, (unsigned long long)( nMask+1 ), (unsigned long long)nExpectNextStreamPos );
}
}
else
{
// Subsequent reliable message encode the position as an offset from previous.
static const char szOtherReliableStreamPos[] = "reliable streampos offset";
int64 nOffset;
switch ( nFrameType & (3<<3) )
{
case 0<<3: nOffset = 0; break;
case 1<<3: READ_8BITU( nOffset, szOtherReliableStreamPos ); break;
case 2<<3: READ_16BITU( nOffset, szOtherReliableStreamPos ); break;
default: READ_32BITU( nOffset, szOtherReliableStreamPos ); break;
}
nDecodeReliablePos += nOffset;
}
//
// Decode size, locate segment data
//
READ_SEGMENT_DATA_SIZE( reliable )
// Ingest the segment.
if ( !SNP_ReceiveReliableSegment( nPktNum, nDecodeReliablePos, pSegmentData, cbSegmentSize, usecNow ) )
{
if ( !BStateIsActive() )
return false; // we decided to nuke the connection - abort packet processing
// We're not able to ingest this reliable segment at the moment,
// but we didn't terminate the connection. So do not ack this packet
// to the peer. We need them to retransmit
bInhibitMarkReceived = true;
}
// Advance pointer for the next reliable segment, if any.
nDecodeReliablePos += cbSegmentSize;
// Decoding rules state that if we have established a message number,
// (from an earlier unreliable message), then we advance it.
if ( nCurMsgNum > 0 )
++nCurMsgNum;
}
else if ( ( nFrameType & 0xfc ) == 0x80 )
{
//
// Stop waiting
//
int64 nOffset = 0;
static const char szStopWaitingOffset[] = "stop_waiting offset";
switch ( nFrameType & 3 )
{
case 0: READ_8BITU( nOffset, szStopWaitingOffset ); break;
case 1: READ_16BITU( nOffset, szStopWaitingOffset ); break;
case 2: READ_24BITU( nOffset, szStopWaitingOffset ); break;
case 3: READ_64BITU( nOffset, szStopWaitingOffset ); break;
}
if ( nOffset >= nPktNum )
{
DECODE_ERROR( "stop_waiting pktNum %llu offset %llu", nPktNum, nOffset );
}
++nOffset;
int64 nMinPktNumToSendAcks = nPktNum-nOffset;
if ( nMinPktNumToSendAcks == m_receiverState.m_nMinPktNumToSendAcks )
continue;
if ( nMinPktNumToSendAcks < m_receiverState.m_nMinPktNumToSendAcks )
{
// Sender must never reduce this number! Check for bugs or bogus sender
if ( nPktNum >= m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks )
{
DECODE_ERROR( "SNP stop waiting reduced %lld (pkt %lld) -> %lld (pkt %lld)",
(long long)m_receiverState.m_nMinPktNumToSendAcks,
(long long)m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks,
(long long)nMinPktNumToSendAcks,
(long long)nPktNum
);
}
continue;
}
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld stop waiting: %lld (was %lld)",
GetDescription(),
(long long)nPktNum,
(long long)nMinPktNumToSendAcks, (long long)m_receiverState.m_nMinPktNumToSendAcks );
m_receiverState.m_nMinPktNumToSendAcks = nMinPktNumToSendAcks;
m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks = nPktNum;
// Trim from the front of the packet gap list,
// we can stop reporting these losses to the sender
auto h = m_receiverState.m_mapPacketGaps.begin();
while ( h->first <= m_receiverState.m_nMinPktNumToSendAcks )
{
if ( h->second.m_nEnd > m_receiverState.m_nMinPktNumToSendAcks )
{
// Ug. You're not supposed to modify the key in a map.
// I suppose that's legit, since you could violate the ordering.
// but in this case I know that this change is OK.
const_cast<int64 &>( h->first ) = m_receiverState.m_nMinPktNumToSendAcks;
break;
}
// Were we pending an ack on this?
if ( m_receiverState.m_itPendingAck == h )
++m_receiverState.m_itPendingAck;
// Were we pending a nack on this?
if ( m_receiverState.m_itPendingNack == h )
{
// I am not sure this is even possible.
AssertMsg( false, "Expiring packet gap, which had pending NACK" );
// But just in case, this would be the proper action
++m_receiverState.m_itPendingNack;
}
// Packet loss is in the past. Forget about it and move on
h = m_receiverState.m_mapPacketGaps.erase(h);
}
}
else if ( ( nFrameType & 0xf0 ) == 0x90 )
{
//
// Ack
//
#if STEAMNETWORKINGSOCKETS_SNP_PARANOIA > 0
m_senderState.DebugCheckInFlightPacketMap();
#if STEAMNETWORKINGSOCKETS_SNP_PARANOIA == 1
if ( ( nPktNum & 255 ) == 0 ) // only do it periodically
#endif
{
m_senderState.DebugCheckInFlightPacketMap();
}
#endif
// Parse latest received sequence number
int64 nLatestRecvSeqNum;
{
static const char szAckLatestPktNum[] = "ack latest pktnum";
int64 nLowerBits, nMask;
if ( nFrameType & 0x40 )
{
READ_32BITU( nLowerBits, szAckLatestPktNum );
nMask = 0xffffffff;
nLatestRecvSeqNum = NearestWithSameLowerBits( (int32)nLowerBits, m_statsEndToEnd.m_nNextSendSequenceNumber );
}
else
{
READ_16BITU( nLowerBits, szAckLatestPktNum );
nMask = 0xffff;
nLatestRecvSeqNum = NearestWithSameLowerBits( (int16)nLowerBits, m_statsEndToEnd.m_nNextSendSequenceNumber );
}
Assert( ( nLatestRecvSeqNum & nMask ) == nLowerBits );
// Find the message number that is closes to
if ( nLatestRecvSeqNum < 0 )
{
DECODE_ERROR( "SNP decode ack latest pktnum underflow. %llx mod %llx, next send %llx",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber );
}
if ( std::abs( nLatestRecvSeqNum - m_statsEndToEnd.m_nNextSendSequenceNumber ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent abs latest recv pkt number using %llx mod %llx, next send %llx\n",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber );
}
if ( nLatestRecvSeqNum >= m_statsEndToEnd.m_nNextSendSequenceNumber )
{
DECODE_ERROR( "SNP decode ack latest pktnum %lld (%llx mod %llx), but next outoing packet is %lld (%llx).",
(long long)nLatestRecvSeqNum, (unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ),
(long long)m_statsEndToEnd.m_nNextSendSequenceNumber, (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber
);
}
}
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld latest recv %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum
);
// Locate our bookkeeping for this packet, or the latest one before it
// Remember, we have a sentinel with a low, invalid packet number
Assert( !m_senderState.m_mapInFlightPacketsByPktNum.empty() );
auto inFlightPkt = m_senderState.m_mapInFlightPacketsByPktNum.upper_bound( nLatestRecvSeqNum );
--inFlightPkt;
Assert( inFlightPkt->first <= nLatestRecvSeqNum );
// Parse out delay, and process the ping
{
uint16 nPackedDelay;
READ_16BITU( nPackedDelay, "ack delay" );
if ( nPackedDelay != 0xffff && inFlightPkt->first == nLatestRecvSeqNum && inFlightPkt->second.m_pTransport == ctx.m_pTransport )
{
SteamNetworkingMicroseconds usecDelay = SteamNetworkingMicroseconds( nPackedDelay ) << k_nAckDelayPrecisionShift;
SteamNetworkingMicroseconds usecElapsed = usecNow - inFlightPkt->second.m_usecWhenSent;
Assert( usecElapsed >= 0 );
// Account for their reported delay, and calculate ping, in MS
int msPing = ( usecElapsed - usecDelay ) / 1000;
// Does this seem bogus? (We allow a small amount of slop.)
// NOTE: A malicious sender could lie about this delay, tricking us
// into thinking that the real network latency is low, they are just
// delaying their replies. This actually matters, since the ping time
// is an input into the rate calculation. So we might need to
// occasionally send pings that require an immediately reply, and
// if those ping times seem way out of whack with the ones where they are
// allowed to send a delay, take action against them.
if ( msPing < -1 || msPing > 2000 )
{
// Either they are lying or some weird timer stuff is happening.
// Either way, discard it.
SpewMsgGroup( m_connectionConfig.m_LogLevel_AckRTT.Get(), "[%s] decode pkt %lld latest recv %lld delay %lluusec INVALID ping %lldusec\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum,
(unsigned long long)usecDelay,
(long long)usecElapsed
);
}
else
{
// Clamp, if we have slop
if ( msPing < 0 )
msPing = 0;
ProcessSNPPing( msPing, ctx );
// Spew
SpewVerboseGroup( m_connectionConfig.m_LogLevel_AckRTT.Get(), "[%s] decode pkt %lld latest recv %lld delay %.1fms elapsed %.1fms ping %dms\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum,
(float)(usecDelay * 1e-3 ),
(float)(usecElapsed * 1e-3 ),
msPing
);
}
}
}
// Parse number of blocks
int nBlocks = nFrameType&7;
if ( nBlocks == 7 )
READ_8BITU( nBlocks, "ack num blocks" );
// If they actually sent us any blocks, that means they are fragmented.
// We should make sure and tell them to stop sending us these nacks
// and move forward.
if ( nBlocks > 0 )
{
// Decrease flush delay the more blocks they send us.
// FIXME - This is not an optimal way to do this. Forcing us to
// ack everything is not what we want to do. Instead, we should
// use a separate timer for when we need to flush out a stop_waiting
// packet!
SteamNetworkingMicroseconds usecDelay = 250*1000 / nBlocks;
QueueFlushAllAcks( usecNow + usecDelay );
}
// Process ack blocks, working backwards from the latest received sequence number.
// Note that we have to parse all this stuff out, even if it's old news (packets older
// than the stop_aiting value we sent), because we need to do that to get to the rest
// of the packet.
bool bAckedReliableRange = false;
int64 nPktNumAckEnd = nLatestRecvSeqNum+1;
while ( nBlocks >= 0 )
{
// Parse out number of acks/nacks.
// Have we parsed all the real blocks?
int64 nPktNumAckBegin, nPktNumNackBegin;
if ( nBlocks == 0 )
{
// Implicit block. Everything earlier between the last
// NACK and the stop_waiting value is implicitly acked!
if ( nPktNumAckEnd <= m_senderState.m_nMinPktWaitingOnAck )
break;
nPktNumAckBegin = m_senderState.m_nMinPktWaitingOnAck;
nPktNumNackBegin = nPktNumAckBegin;
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld ack last block ack begin %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nPktNumAckBegin );
}
else
{
uint8 nBlockHeader;
READ_8BITU( nBlockHeader, "ack block header" );
// Ack count?
int64 numAcks = ( nBlockHeader>> 4 ) & 7;
if ( nBlockHeader & 0x80 )
{
uint64 nUpperBits;
READ_VARINT( nUpperBits, "ack count upper bits" );
if ( nUpperBits > 100000 )
DECODE_ERROR( "Ack count of %llu<<3 is crazy", (unsigned long long)nUpperBits );
numAcks |= nUpperBits<<3;
}
nPktNumAckBegin = nPktNumAckEnd - numAcks;
if ( nPktNumAckBegin < 0 )
DECODE_ERROR( "Ack range underflow, end=%lld, num=%lld", (long long)nPktNumAckEnd, (long long)numAcks );
// Extended nack count?
int64 numNacks = nBlockHeader & 7;
if ( nBlockHeader & 0x08)
{
uint64 nUpperBits;
READ_VARINT( nUpperBits, "nack count upper bits" );
if ( nUpperBits > 100000 )
DECODE_ERROR( "Nack count of %llu<<3 is crazy", nUpperBits );
numNacks |= nUpperBits<<3;
}
nPktNumNackBegin = nPktNumAckBegin - numNacks;
if ( nPktNumNackBegin < 0 )
DECODE_ERROR( "Nack range underflow, end=%lld, num=%lld", (long long)nPktNumAckBegin, (long long)numAcks );
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld nack [%lld,%lld) ack [%lld,%lld)\n",
GetDescription(),
(long long)nPktNum,
(long long)nPktNumNackBegin, (long long)( nPktNumNackBegin + numNacks ),
(long long)nPktNumAckBegin, (long long)( nPktNumAckBegin + numAcks )
);
}
// Process acks first.
Assert( nPktNumAckBegin >= 0 );
while ( inFlightPkt->first >= nPktNumAckBegin )
{
Assert( inFlightPkt->first < nPktNumAckEnd );
// Scan reliable segments, and see if any are marked for retry or are in flight
for ( const SNPRange_t &relRange: inFlightPkt->second.m_vecReliableSegments )
{
// If range is present, it should be in only one of these two tables.
if ( m_senderState.m_listInFlightReliableRange.erase( relRange ) == 0 )
{
if ( m_senderState.m_listReadyRetryReliableRange.erase( relRange ) > 0 )
{
// When we put stuff into the reliable retry list, we mark it as pending again.
// But now it's acked, so it's no longer pending, even though we didn't send it.
m_senderState.m_cbPendingReliable -= int( relRange.length() );
Assert( m_senderState.m_cbPendingReliable >= 0 );
bAckedReliableRange = true;
}
}
else
{
bAckedReliableRange = true;
Assert( m_senderState.m_listReadyRetryReliableRange.count( relRange ) == 0 );
}
}
// Check if this was the next packet we were going to timeout, then advance
// pointer. This guy didn't timeout.
if ( inFlightPkt == m_senderState.m_itNextInFlightPacketToTimeout )
++m_senderState.m_itNextInFlightPacketToTimeout;
// No need to track this anymore, remove from our table
inFlightPkt = m_senderState.m_mapInFlightPacketsByPktNum.erase( inFlightPkt );
--inFlightPkt;
m_senderState.MaybeCheckInFlightPacketMap();
}
// Ack of in-flight end-to-end stats?
if ( nPktNumAckBegin <= m_statsEndToEnd.m_pktNumInFlight && m_statsEndToEnd.m_pktNumInFlight < nPktNumAckEnd )
m_statsEndToEnd.InFlightPktAck( usecNow );
// Process nacks.
Assert( nPktNumNackBegin >= 0 );
while ( inFlightPkt->first >= nPktNumNackBegin )
{
Assert( inFlightPkt->first < nPktNumAckEnd );
SNP_SenderProcessPacketNack( inFlightPkt->first, inFlightPkt->second, "NACK" );
// We'll keep the record on hand, though, in case an ACK comes in
--inFlightPkt;
}
// Continue on to the the next older block
nPktNumAckEnd = nPktNumNackBegin;
--nBlocks;
}
// Should we check for discarding reliable messages we are keeping around in case
// of retransmission, since we know now that they were delivered?
if ( bAckedReliableRange )
{
m_senderState.RemoveAckedReliableMessageFromUnackedList();
// Spew where we think the peer is decoding the reliable stream
if ( nLogLevelPacketDecode >= k_ESteamNetworkingSocketsDebugOutputType_Debug )
{
int64 nPeerReliablePos = m_senderState.m_nReliableStreamPos;
if ( !m_senderState.m_listInFlightReliableRange.empty() )
nPeerReliablePos = std::min( nPeerReliablePos, m_senderState.m_listInFlightReliableRange.begin()->first.m_nBegin );
if ( !m_senderState.m_listReadyRetryReliableRange.empty() )
nPeerReliablePos = std::min( nPeerReliablePos, m_senderState.m_listReadyRetryReliableRange.begin()->first.m_nBegin );
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld peer reliable pos = %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nPeerReliablePos );
}
}
// Check if any of this was new info, then advance our stop_waiting value.
if ( nLatestRecvSeqNum > m_senderState.m_nMinPktWaitingOnAck )
{
SpewVerboseGroup( nLogLevelPacketDecode, "[%s] updating min_waiting_on_ack %lld -> %lld\n",
GetDescription(),
(long long)m_senderState.m_nMinPktWaitingOnAck, (long long)nLatestRecvSeqNum );
m_senderState.m_nMinPktWaitingOnAck = nLatestRecvSeqNum;
}
}
else
{
DECODE_ERROR( "Invalid SNP frame lead byte 0x%02x", nFrameType );
}
}
// Should we record that we received it?
if ( bInhibitMarkReceived )
{
// Something really odd. High packet loss / fragmentation.
// Potentially the peer is being abusive and we need
// to protect ourselves.
//
// Act as if the packet was dropped. This will cause the
// peer's sender logic to interpret this as additional packet
// loss and back off. That's a feature, not a bug.
}
else
{
// Update structures needed to populate our ACKs.
// If we received reliable data now, then schedule an ack
bool bScheduleAck = nDecodeReliablePos > 0;
SNP_RecordReceivedPktNum( nPktNum, usecNow, bScheduleAck );
}
// Track end-to-end flow. Even if we decided to tell our peer that
// we did not receive this, we want our own stats to reflect
// that we did. (And we want to be able to quickly reject a
// packet with this same number.)
//
// Also, note that order of operations is important. This call must
// happen after the SNP_RecordReceivedPktNum call above
m_statsEndToEnd.TrackProcessSequencedPacket( nPktNum, usecNow, usecTimeSinceLast );
// Packet can be processed further
return true;
// Make sure these don't get used beyond where we intended them to get used
#undef DECODE_ERROR
#undef EXPECT_BYTES
#undef READ_8BITU
#undef READ_16BITU
#undef READ_24BITU
#undef READ_32BITU
#undef READ_64BITU
#undef READ_VARINT
#undef READ_SEGMENT_DATA_SIZE
} | 2484 | True | 1 |
|
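Editor's note: the commit title shared by the CVE-2020-6016 record above and the CVE-2020-6017 record that begins below ("Drop unreliable segments with weird offset/size") points at the mitigation — validate the decoded segment offset and size against sane message limits before the segment is ever buffered, instead of trusting the wire values. A hedged C sketch of such a pre-buffering check follows; the size cap and the segment_is_sane helper are invented and do not correspond to the library's real constants.

#include <stdint.h>

#define MAX_UNRELIABLE_MSG_SIZE 2048u   /* invented cap standing in for the real per-message limit */

/* Returns 1 if an unreliable segment may be buffered, 0 if it should be dropped.
 * Both offset and size are decoded from the wire and must not be trusted. */
static int segment_is_sane(uint32_t offset, uint32_t size, int is_last_segment) {
    if (size > MAX_UNRELIABLE_MSG_SIZE)
        return 0;                                   /* single segment already too large */
    if (offset > MAX_UNRELIABLE_MSG_SIZE - size)
        return 0;                                   /* offset + size exceeds the message cap */
    if (!is_last_segment && size == 0)
        return 0;                                   /* empty non-final segments make no sense */
    return 1;
}

The key property is that the check runs before any allocation or copy, so a malformed offset (including a negative value showing up as a huge unsigned number) can no longer steer writes outside the reassembly buffer.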
CVE-2020-6017 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles long unreliable segments in function SNP_ReceiveUnreliableSegment() when configured to support plain-text messages, leading to a Heap-Based Buffer Overflow and resulting in a memory corruption and possibly even a remote code execution."}] | 2022-04-12T16:19Z | 2020-12-03T14:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 11:01:54-07:00 | Drop unreliable segments with weird offset/size.
And be more deliberate about limits of unreliable message/segment sizes. | e0c86dcb9139771db3db0cfdb1fb8bef0af19c43 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::ProcessPlainTextDataChunk | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::ProcessPlainTextDataChunk( int usecTimeSinceLast , RecvPacketContext_t & ctx) | ['usecTimeSinceLast', 'ctx'] | bool CSteamNetworkConnectionBase::ProcessPlainTextDataChunk( int usecTimeSinceLast, RecvPacketContext_t &ctx )
{
#define DECODE_ERROR( ... ) do { \
ConnectionState_ProblemDetectedLocally( k_ESteamNetConnectionEnd_Misc_InternalError, __VA_ARGS__ ); \
return false; } while(false)
#define EXPECT_BYTES(n,pszWhatFor) \
do { \
if ( pDecode + (n) > pEnd ) \
DECODE_ERROR( "SNP decode overrun, %d bytes for %s", (n), pszWhatFor ); \
} while (false)
#define READ_8BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(1,pszWhatFor); var = *(uint8 *)pDecode; pDecode += 1; } while(false)
#define READ_16BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(2,pszWhatFor); var = LittleWord(*(uint16 *)pDecode); pDecode += 2; } while(false)
#define READ_24BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(3,pszWhatFor); \
var = *(uint8 *)pDecode; pDecode += 1; \
var |= uint32( LittleWord(*(uint16 *)pDecode) ) << 8U; pDecode += 2; \
} while(false)
#define READ_32BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(4,pszWhatFor); var = LittleDWord(*(uint32 *)pDecode); pDecode += 4; } while(false)
#define READ_48BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(6,pszWhatFor); \
var = LittleWord( *(uint16 *)pDecode ); pDecode += 2; \
var |= uint64( LittleDWord(*(uint32 *)pDecode) ) << 16U; pDecode += 4; \
} while(false)
#define READ_64BITU( var, pszWhatFor ) \
do { EXPECT_BYTES(8,pszWhatFor); var = LittleQWord(*(uint64 *)pDecode); pDecode += 8; } while(false)
#define READ_VARINT( var, pszWhatFor ) \
do { pDecode = DeserializeVarInt( pDecode, pEnd, var ); if ( !pDecode ) { DECODE_ERROR( "SNP data chunk decode overflow, varint for %s", pszWhatFor ); } } while(false)
#define READ_SEGMENT_DATA_SIZE( is_reliable ) \
int cbSegmentSize; \
{ \
int sizeFlags = nFrameType & 7; \
if ( sizeFlags <= 4 ) \
{ \
uint8 lowerSizeBits; \
READ_8BITU( lowerSizeBits, #is_reliable " size lower bits" ); \
cbSegmentSize = (sizeFlags<<8) + lowerSizeBits; \
if ( pDecode + cbSegmentSize > pEnd ) \
{ \
DECODE_ERROR( "SNP decode overrun %d bytes for %s segment data.", cbSegmentSize, #is_reliable ); \
} \
} \
else if ( sizeFlags == 7 ) \
{ \
cbSegmentSize = pEnd - pDecode; \
} \
else \
{ \
DECODE_ERROR( "Invalid SNP frame lead byte 0x%02x. (size bits)", nFrameType ); \
} \
} \
const uint8 *pSegmentData = pDecode; \
pDecode += cbSegmentSize;
// Make sure we have initialized the connection
Assert( BStateIsActive() );
const SteamNetworkingMicroseconds usecNow = ctx.m_usecNow;
const int64 nPktNum = ctx.m_nPktNum;
bool bInhibitMarkReceived = false;
const int nLogLevelPacketDecode = m_connectionConfig.m_LogLevel_PacketDecode.Get();
SpewVerboseGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld\n", GetDescription(), (long long)nPktNum );
// Decode frames until we get to the end of the payload
const byte *pDecode = (const byte *)ctx.m_pPlainText;
const byte *pEnd = pDecode + ctx.m_cbPlainText;
int64 nCurMsgNum = 0;
int64 nDecodeReliablePos = 0;
while ( pDecode < pEnd )
{
uint8 nFrameType = *pDecode;
++pDecode;
if ( ( nFrameType & 0xc0 ) == 0x00 )
{
//
// Unreliable segment
//
// Decode message number
if ( nCurMsgNum == 0 )
{
// First unreliable frame. Message number is absolute, but only bottom N bits are sent
static const char szUnreliableMsgNumOffset[] = "unreliable msgnum";
int64 nLowerBits, nMask;
if ( nFrameType & 0x10 )
{
READ_32BITU( nLowerBits, szUnreliableMsgNumOffset );
nMask = 0xffffffff;
nCurMsgNum = NearestWithSameLowerBits( (int32)nLowerBits, m_receiverState.m_nHighestSeenMsgNum );
}
else
{
READ_16BITU( nLowerBits, szUnreliableMsgNumOffset );
nMask = 0xffff;
nCurMsgNum = NearestWithSameLowerBits( (int16)nLowerBits, m_receiverState.m_nHighestSeenMsgNum );
}
Assert( ( nCurMsgNum & nMask ) == nLowerBits );
if ( nCurMsgNum <= 0 )
{
DECODE_ERROR( "SNP decode unreliable msgnum underflow. %llx mod %llx, highest seen %llx",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_receiverState.m_nHighestSeenMsgNum );
}
if ( std::abs( nCurMsgNum - m_receiverState.m_nHighestSeenMsgNum ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent abs unreliable message number using %llx mod %llx, highest seen %llx\n",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_receiverState.m_nHighestSeenMsgNum );
}
}
else
{
if ( nFrameType & 0x10 )
{
uint64 nMsgNumOffset;
READ_VARINT( nMsgNumOffset, "unreliable msgnum offset" );
nCurMsgNum += nMsgNumOffset;
}
else
{
++nCurMsgNum;
}
}
if ( nCurMsgNum > m_receiverState.m_nHighestSeenMsgNum )
m_receiverState.m_nHighestSeenMsgNum = nCurMsgNum;
//
// Decode segment offset in message
//
uint32 nOffset = 0;
if ( nFrameType & 0x08 )
READ_VARINT( nOffset, "unreliable data offset" );
//
// Decode size, locate segment data
//
READ_SEGMENT_DATA_SIZE( unreliable )
Assert( cbSegmentSize > 0 ); // !TEST! Bogus assert, zero byte messages are OK. Remove after testing
// Receive the segment
bool bLastSegmentInMessage = ( nFrameType & 0x20 ) != 0;
SNP_ReceiveUnreliableSegment( nCurMsgNum, nOffset, pSegmentData, cbSegmentSize, bLastSegmentInMessage, usecNow );
}
else if ( ( nFrameType & 0xe0 ) == 0x40 )
{
//
// Reliable segment
//
// First reliable segment?
if ( nDecodeReliablePos == 0 )
{
// Stream position is absolute. How many bits?
static const char szFirstReliableStreamPos[] = "first reliable streampos";
int64 nOffset, nMask;
switch ( nFrameType & (3<<3) )
{
case 0<<3: READ_24BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<24)-1; break;
case 1<<3: READ_32BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<32)-1; break;
case 2<<3: READ_48BITU( nOffset, szFirstReliableStreamPos ); nMask = (1ll<<48)-1; break;
default: DECODE_ERROR( "Reserved reliable stream pos size" );
}
// What do we expect to receive next?
int64 nExpectNextStreamPos = m_receiverState.m_nReliableStreamPos + len( m_receiverState.m_bufReliableStream );
// Find the stream offset closest to that
nDecodeReliablePos = ( nExpectNextStreamPos & ~nMask ) + nOffset;
if ( nDecodeReliablePos + (nMask>>1) < nExpectNextStreamPos )
{
nDecodeReliablePos += nMask+1;
Assert( ( nDecodeReliablePos & nMask ) == nOffset );
Assert( nExpectNextStreamPos < nDecodeReliablePos );
Assert( nExpectNextStreamPos + (nMask>>1) >= nDecodeReliablePos );
}
if ( nDecodeReliablePos <= 0 )
{
DECODE_ERROR( "SNP decode first reliable stream pos underflow. %llx mod %llx, expected next %llx",
(unsigned long long)nOffset, (unsigned long long)( nMask+1 ), (unsigned long long)nExpectNextStreamPos );
}
if ( std::abs( nDecodeReliablePos - nExpectNextStreamPos ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent reliable stream pos using %llx mod %llx, expected next %llx\n",
(unsigned long long)nOffset, (unsigned long long)( nMask+1 ), (unsigned long long)nExpectNextStreamPos );
}
}
else
{
// Subsequent reliable message encode the position as an offset from previous.
static const char szOtherReliableStreamPos[] = "reliable streampos offset";
int64 nOffset;
switch ( nFrameType & (3<<3) )
{
case 0<<3: nOffset = 0; break;
case 1<<3: READ_8BITU( nOffset, szOtherReliableStreamPos ); break;
case 2<<3: READ_16BITU( nOffset, szOtherReliableStreamPos ); break;
default: READ_32BITU( nOffset, szOtherReliableStreamPos ); break;
}
nDecodeReliablePos += nOffset;
}
//
// Decode size, locate segment data
//
READ_SEGMENT_DATA_SIZE( reliable )
// Ingest the segment.
if ( !SNP_ReceiveReliableSegment( nPktNum, nDecodeReliablePos, pSegmentData, cbSegmentSize, usecNow ) )
{
if ( !BStateIsActive() )
return false; // we decided to nuke the connection - abort packet processing
// We're not able to ingest this reliable segment at the moment,
// but we didn't terminate the connection. So do not ack this packet
// to the peer. We need them to retransmit
bInhibitMarkReceived = true;
}
// Advance pointer for the next reliable segment, if any.
nDecodeReliablePos += cbSegmentSize;
// Decoding rules state that if we have established a message number,
// (from an earlier unreliable message), then we advance it.
if ( nCurMsgNum > 0 )
++nCurMsgNum;
}
else if ( ( nFrameType & 0xfc ) == 0x80 )
{
//
// Stop waiting
//
int64 nOffset = 0;
static const char szStopWaitingOffset[] = "stop_waiting offset";
switch ( nFrameType & 3 )
{
case 0: READ_8BITU( nOffset, szStopWaitingOffset ); break;
case 1: READ_16BITU( nOffset, szStopWaitingOffset ); break;
case 2: READ_24BITU( nOffset, szStopWaitingOffset ); break;
case 3: READ_64BITU( nOffset, szStopWaitingOffset ); break;
}
if ( nOffset >= nPktNum )
{
DECODE_ERROR( "stop_waiting pktNum %llu offset %llu", nPktNum, nOffset );
}
++nOffset;
int64 nMinPktNumToSendAcks = nPktNum-nOffset;
if ( nMinPktNumToSendAcks == m_receiverState.m_nMinPktNumToSendAcks )
continue;
if ( nMinPktNumToSendAcks < m_receiverState.m_nMinPktNumToSendAcks )
{
// Sender must never reduce this number! Check for bugs or bogus sender
if ( nPktNum >= m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks )
{
DECODE_ERROR( "SNP stop waiting reduced %lld (pkt %lld) -> %lld (pkt %lld)",
(long long)m_receiverState.m_nMinPktNumToSendAcks,
(long long)m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks,
(long long)nMinPktNumToSendAcks,
(long long)nPktNum
);
}
continue;
}
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld stop waiting: %lld (was %lld)",
GetDescription(),
(long long)nPktNum,
(long long)nMinPktNumToSendAcks, (long long)m_receiverState.m_nMinPktNumToSendAcks );
m_receiverState.m_nMinPktNumToSendAcks = nMinPktNumToSendAcks;
m_receiverState.m_nPktNumUpdatedMinPktNumToSendAcks = nPktNum;
// Trim from the front of the packet gap list,
// we can stop reporting these losses to the sender
auto h = m_receiverState.m_mapPacketGaps.begin();
while ( h->first <= m_receiverState.m_nMinPktNumToSendAcks )
{
if ( h->second.m_nEnd > m_receiverState.m_nMinPktNumToSendAcks )
{
// Ug. You're not supposed to modify the key in a map.
// I suppose that restriction is legit, since you could violate the ordering,
// but in this case I know that this change is OK.
const_cast<int64 &>( h->first ) = m_receiverState.m_nMinPktNumToSendAcks;
break;
}
// Were we pending an ack on this?
if ( m_receiverState.m_itPendingAck == h )
++m_receiverState.m_itPendingAck;
// Were we pending a nack on this?
if ( m_receiverState.m_itPendingNack == h )
{
// I am not sure this is even possible.
AssertMsg( false, "Expiring packet gap, which had pending NACK" );
// But just in case, this would be the proper action
++m_receiverState.m_itPendingNack;
}
// Packet loss is in the past. Forget about it and move on
h = m_receiverState.m_mapPacketGaps.erase(h);
}
}
else if ( ( nFrameType & 0xf0 ) == 0x90 )
{
//
// Ack
//
#if STEAMNETWORKINGSOCKETS_SNP_PARANOIA > 0
m_senderState.DebugCheckInFlightPacketMap();
#if STEAMNETWORKINGSOCKETS_SNP_PARANOIA == 1
if ( ( nPktNum & 255 ) == 0 ) // only do it periodically
#endif
{
m_senderState.DebugCheckInFlightPacketMap();
}
#endif
// Parse latest received sequence number
int64 nLatestRecvSeqNum;
{
static const char szAckLatestPktNum[] = "ack latest pktnum";
int64 nLowerBits, nMask;
if ( nFrameType & 0x40 )
{
READ_32BITU( nLowerBits, szAckLatestPktNum );
nMask = 0xffffffff;
nLatestRecvSeqNum = NearestWithSameLowerBits( (int32)nLowerBits, m_statsEndToEnd.m_nNextSendSequenceNumber );
}
else
{
READ_16BITU( nLowerBits, szAckLatestPktNum );
nMask = 0xffff;
nLatestRecvSeqNum = NearestWithSameLowerBits( (int16)nLowerBits, m_statsEndToEnd.m_nNextSendSequenceNumber );
}
Assert( ( nLatestRecvSeqNum & nMask ) == nLowerBits );
// Find the sequence number closest to our next send sequence number
if ( nLatestRecvSeqNum < 0 )
{
DECODE_ERROR( "SNP decode ack latest pktnum underflow. %llx mod %llx, next send %llx",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber );
}
if ( std::abs( nLatestRecvSeqNum - m_statsEndToEnd.m_nNextSendSequenceNumber ) > (nMask>>2) )
{
// We really should never get close to this boundary.
SpewWarningRateLimited( usecNow, "Sender sent abs latest recv pkt number using %llx mod %llx, next send %llx\n",
(unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ), (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber );
}
if ( nLatestRecvSeqNum >= m_statsEndToEnd.m_nNextSendSequenceNumber )
{
DECODE_ERROR( "SNP decode ack latest pktnum %lld (%llx mod %llx), but next outoing packet is %lld (%llx).",
(long long)nLatestRecvSeqNum, (unsigned long long)nLowerBits, (unsigned long long)( nMask+1 ),
(long long)m_statsEndToEnd.m_nNextSendSequenceNumber, (unsigned long long)m_statsEndToEnd.m_nNextSendSequenceNumber
);
}
}
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld latest recv %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum
);
// Locate our bookkeeping for this packet, or the latest one before it
// Remember, we have a sentinel with a low, invalid packet number
Assert( !m_senderState.m_mapInFlightPacketsByPktNum.empty() );
auto inFlightPkt = m_senderState.m_mapInFlightPacketsByPktNum.upper_bound( nLatestRecvSeqNum );
--inFlightPkt;
Assert( inFlightPkt->first <= nLatestRecvSeqNum );
// Parse out delay, and process the ping
{
uint16 nPackedDelay;
READ_16BITU( nPackedDelay, "ack delay" );
if ( nPackedDelay != 0xffff && inFlightPkt->first == nLatestRecvSeqNum && inFlightPkt->second.m_pTransport == ctx.m_pTransport )
{
SteamNetworkingMicroseconds usecDelay = SteamNetworkingMicroseconds( nPackedDelay ) << k_nAckDelayPrecisionShift;
SteamNetworkingMicroseconds usecElapsed = usecNow - inFlightPkt->second.m_usecWhenSent;
Assert( usecElapsed >= 0 );
// Account for their reported delay, and calculate ping, in MS
int msPing = ( usecElapsed - usecDelay ) / 1000;
// Does this seem bogus? (We allow a small amount of slop.)
// NOTE: A malicious sender could lie about this delay, tricking us
// into thinking that the real network latency is low, they are just
// delaying their replies. This actually matters, since the ping time
// is an input into the rate calculation. So we might need to
// occasionally send pings that require an immediately reply, and
// if those ping times seem way out of whack with the ones where they are
// allowed to send a delay, take action against them.
if ( msPing < -1 || msPing > 2000 )
{
// Either they are lying or some weird timer stuff is happening.
// Either way, discard it.
SpewMsgGroup( m_connectionConfig.m_LogLevel_AckRTT.Get(), "[%s] decode pkt %lld latest recv %lld delay %lluusec INVALID ping %lldusec\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum,
(unsigned long long)usecDelay,
(long long)usecElapsed
);
}
else
{
// Clamp, if we have slop
if ( msPing < 0 )
msPing = 0;
ProcessSNPPing( msPing, ctx );
// Spew
SpewVerboseGroup( m_connectionConfig.m_LogLevel_AckRTT.Get(), "[%s] decode pkt %lld latest recv %lld delay %.1fms elapsed %.1fms ping %dms\n",
GetDescription(),
(long long)nPktNum, (long long)nLatestRecvSeqNum,
(float)(usecDelay * 1e-3 ),
(float)(usecElapsed * 1e-3 ),
msPing
);
}
}
}
// Parse number of blocks
int nBlocks = nFrameType&7;
if ( nBlocks == 7 )
READ_8BITU( nBlocks, "ack num blocks" );
// If they actually sent us any blocks, that means they are fragmented.
// We should make sure and tell them to stop sending us these nacks
// and move forward.
if ( nBlocks > 0 )
{
// Decrease flush delay the more blocks they send us.
// FIXME - This is not an optimal way to do this. Forcing us to
// ack everything is not what we want to do. Instead, we should
// use a separate timer for when we need to flush out a stop_waiting
// packet!
SteamNetworkingMicroseconds usecDelay = 250*1000 / nBlocks;
QueueFlushAllAcks( usecNow + usecDelay );
}
// Process ack blocks, working backwards from the latest received sequence number.
// Note that we have to parse all this stuff out, even if it's old news (packets older
// than the stop_waiting value we sent), because we need to do that to get to the rest
// of the packet.
bool bAckedReliableRange = false;
int64 nPktNumAckEnd = nLatestRecvSeqNum+1;
while ( nBlocks >= 0 )
{
// Parse out number of acks/nacks.
// Have we parsed all the real blocks?
int64 nPktNumAckBegin, nPktNumNackBegin;
if ( nBlocks == 0 )
{
// Implicit block. Everything earlier between the last
// NACK and the stop_waiting value is implicitly acked!
if ( nPktNumAckEnd <= m_senderState.m_nMinPktWaitingOnAck )
break;
nPktNumAckBegin = m_senderState.m_nMinPktWaitingOnAck;
nPktNumNackBegin = nPktNumAckBegin;
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld ack last block ack begin %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nPktNumAckBegin );
}
else
{
uint8 nBlockHeader;
READ_8BITU( nBlockHeader, "ack block header" );
// Ack count?
int64 numAcks = ( nBlockHeader>> 4 ) & 7;
if ( nBlockHeader & 0x80 )
{
uint64 nUpperBits;
READ_VARINT( nUpperBits, "ack count upper bits" );
if ( nUpperBits > 100000 )
DECODE_ERROR( "Ack count of %llu<<3 is crazy", (unsigned long long)nUpperBits );
numAcks |= nUpperBits<<3;
}
nPktNumAckBegin = nPktNumAckEnd - numAcks;
if ( nPktNumAckBegin < 0 )
DECODE_ERROR( "Ack range underflow, end=%lld, num=%lld", (long long)nPktNumAckEnd, (long long)numAcks );
// Extended nack count?
int64 numNacks = nBlockHeader & 7;
if ( nBlockHeader & 0x08)
{
uint64 nUpperBits;
READ_VARINT( nUpperBits, "nack count upper bits" );
if ( nUpperBits > 100000 )
DECODE_ERROR( "Nack count of %llu<<3 is crazy", nUpperBits );
numNacks |= nUpperBits<<3;
}
nPktNumNackBegin = nPktNumAckBegin - numNacks;
if ( nPktNumNackBegin < 0 )
DECODE_ERROR( "Nack range underflow, end=%lld, num=%lld", (long long)nPktNumAckBegin, (long long)numAcks );
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld nack [%lld,%lld) ack [%lld,%lld)\n",
GetDescription(),
(long long)nPktNum,
(long long)nPktNumNackBegin, (long long)( nPktNumNackBegin + numNacks ),
(long long)nPktNumAckBegin, (long long)( nPktNumAckBegin + numAcks )
);
}
// Process acks first.
Assert( nPktNumAckBegin >= 0 );
while ( inFlightPkt->first >= nPktNumAckBegin )
{
Assert( inFlightPkt->first < nPktNumAckEnd );
// Scan reliable segments, and see if any are marked for retry or are in flight
for ( const SNPRange_t &relRange: inFlightPkt->second.m_vecReliableSegments )
{
// If range is present, it should be in only one of these two tables.
if ( m_senderState.m_listInFlightReliableRange.erase( relRange ) == 0 )
{
if ( m_senderState.m_listReadyRetryReliableRange.erase( relRange ) > 0 )
{
// When we put stuff into the reliable retry list, we mark it as pending again.
// But now it's acked, so it's no longer pending, even though we didn't send it.
m_senderState.m_cbPendingReliable -= int( relRange.length() );
Assert( m_senderState.m_cbPendingReliable >= 0 );
bAckedReliableRange = true;
}
}
else
{
bAckedReliableRange = true;
Assert( m_senderState.m_listReadyRetryReliableRange.count( relRange ) == 0 );
}
}
// Check if this was the next packet we were going to timeout, then advance
// pointer. This guy didn't timeout.
if ( inFlightPkt == m_senderState.m_itNextInFlightPacketToTimeout )
++m_senderState.m_itNextInFlightPacketToTimeout;
// No need to track this anymore, remove from our table
inFlightPkt = m_senderState.m_mapInFlightPacketsByPktNum.erase( inFlightPkt );
--inFlightPkt;
m_senderState.MaybeCheckInFlightPacketMap();
}
// Ack of in-flight end-to-end stats?
if ( nPktNumAckBegin <= m_statsEndToEnd.m_pktNumInFlight && m_statsEndToEnd.m_pktNumInFlight < nPktNumAckEnd )
m_statsEndToEnd.InFlightPktAck( usecNow );
// Process nacks.
Assert( nPktNumNackBegin >= 0 );
while ( inFlightPkt->first >= nPktNumNackBegin )
{
Assert( inFlightPkt->first < nPktNumAckEnd );
SNP_SenderProcessPacketNack( inFlightPkt->first, inFlightPkt->second, "NACK" );
// We'll keep the record on hand, though, in case an ACK comes in
--inFlightPkt;
}
// Continue on to the next older block
nPktNumAckEnd = nPktNumNackBegin;
--nBlocks;
}
// Should we check for discarding reliable messages we are keeping around in case
// of retransmission, since we know now that they were delivered?
if ( bAckedReliableRange )
{
m_senderState.RemoveAckedReliableMessageFromUnackedList();
// Spew where we think the peer is decoding the reliable stream
if ( nLogLevelPacketDecode >= k_ESteamNetworkingSocketsDebugOutputType_Debug )
{
int64 nPeerReliablePos = m_senderState.m_nReliableStreamPos;
if ( !m_senderState.m_listInFlightReliableRange.empty() )
nPeerReliablePos = std::min( nPeerReliablePos, m_senderState.m_listInFlightReliableRange.begin()->first.m_nBegin );
if ( !m_senderState.m_listReadyRetryReliableRange.empty() )
nPeerReliablePos = std::min( nPeerReliablePos, m_senderState.m_listReadyRetryReliableRange.begin()->first.m_nBegin );
SpewDebugGroup( nLogLevelPacketDecode, "[%s] decode pkt %lld peer reliable pos = %lld\n",
GetDescription(),
(long long)nPktNum, (long long)nPeerReliablePos );
}
}
// Check if any of this was new info, then advance our stop_waiting value.
if ( nLatestRecvSeqNum > m_senderState.m_nMinPktWaitingOnAck )
{
SpewVerboseGroup( nLogLevelPacketDecode, "[%s] updating min_waiting_on_ack %lld -> %lld\n",
GetDescription(),
(long long)m_senderState.m_nMinPktWaitingOnAck, (long long)nLatestRecvSeqNum );
m_senderState.m_nMinPktWaitingOnAck = nLatestRecvSeqNum;
}
}
else
{
DECODE_ERROR( "Invalid SNP frame lead byte 0x%02x", nFrameType );
}
}
// Should we record that we received it?
if ( bInhibitMarkReceived )
{
// Something really odd. High packet loss / fragmentation.
// Potentially the peer is being abusive and we need
// to protect ourselves.
//
// Act as if the packet was dropped. This will cause the
// peer's sender logic to interpret this as additional packet
// loss and back off. That's a feature, not a bug.
}
else
{
// Update structures needed to populate our ACKs.
// If we received reliable data now, then schedule an ack
bool bScheduleAck = nDecodeReliablePos > 0;
SNP_RecordReceivedPktNum( nPktNum, usecNow, bScheduleAck );
}
// Track end-to-end flow. Even if we decided to tell our peer that
// we did not receive this, we want our own stats to reflect
// that we did. (And we want to be able to quickly reject a
// packet with this same number.)
//
// Also, note that order of operations is important. This call must
// happen after the SNP_RecordReceivedPktNum call above
m_statsEndToEnd.TrackProcessSequencedPacket( nPktNum, usecNow, usecTimeSinceLast );
// Packet can be processed further
return true;
// Make sure these don't get used beyond where we intended them to get used
#undef DECODE_ERROR
#undef EXPECT_BYTES
#undef READ_8BITU
#undef READ_16BITU
#undef READ_24BITU
#undef READ_32BITU
#undef READ_64BITU
#undef READ_VARINT
#undef READ_SEGMENT_DATA_SIZE
} | 2484 | True | 1 |
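
The frame-decoding loop above never dereferences the read cursor without first passing it through EXPECT_BYTES/READ_* or READ_SEGMENT_DATA_SIZE, so every fixed-width field and every segment payload is checked against pEnd before it is consumed. The following is a minimal standalone sketch of that cursor pattern; all names here are hypothetical (ByteCursor is not part of the library), and byte-order conversion is omitted where the real code goes through LittleWord/LittleDWord.

#include <cstdint>
#include <cstring>

// Illustrative bounds-checked reader mirroring the EXPECT_BYTES / READ_* macros.
struct ByteCursor
{
    const uint8_t *cur;
    const uint8_t *end;

    bool Expect(size_t n) const { return (size_t)(end - cur) >= n; }

    bool Read8(uint8_t &out)
    {
        if (!Expect(1)) return false;      // would be DECODE_ERROR in the real code
        out = *cur++;
        return true;
    }

    bool Read16(uint16_t &out)
    {
        if (!Expect(2)) return false;
        memcpy(&out, cur, 2);              // memcpy avoids unaligned loads
        cur += 2;
        return true;
    }

    // Counterpart of READ_SEGMENT_DATA_SIZE: only hand out a segment span if the
    // advertised size actually fits in the remaining payload.
    bool ReadSegment(size_t cbSegment, const uint8_t *&pSegment)
    {
        if (!Expect(cbSegment)) return false;
        pSegment = cur;
        cur += cbSegment;
        return true;
    }
};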
|
CVE-2020-6016 | False | False | False | False | AV:N/AC:L/Au:N/C:C/I:C/A:C | NETWORK | LOW | NONE | COMPLETE | COMPLETE | COMPLETE | 10.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles unreliable segments with negative offsets in function SNP_ReceiveUnreliableSegment(), leading to a Heap-Based Buffer Underflow and a free() of memory not from the heap, resulting in a memory corruption and probably even a remote code execution."}] | 2020-12-10T23:15Z | 2020-11-18T15:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 11:01:54-07:00 | Drop unreliable segments with weird offset/size.
And be more deliberate about limits of unreliable message/segment sizes. | e0c86dcb9139771db3db0cfdb1fb8bef0af19c43 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::SNP_SendMessage | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::SNP_SendMessage( CSteamNetworkingMessage * pSendMessage , SteamNetworkingMicroseconds usecNow , bool * pbThinkImmediately) | ['pSendMessage', 'usecNow', 'pbThinkImmediately'] | int64 CSteamNetworkConnectionBase::SNP_SendMessage( CSteamNetworkingMessage *pSendMessage, SteamNetworkingMicroseconds usecNow, bool *pbThinkImmediately )
{
int cbData = (int)pSendMessage->m_cbSize;
// Assume we won't want to wake up immediately
if ( pbThinkImmediately )
*pbThinkImmediately = false;
// Check if we're full
if ( m_senderState.PendingBytesTotal() + cbData > m_connectionConfig.m_SendBufferSize.Get() )
{
SpewWarningRateLimited( usecNow, "Connection already has %u bytes pending, cannot queue any more messages\n", m_senderState.PendingBytesTotal() );
pSendMessage->Release();
return -k_EResultLimitExceeded;
}
// Check if they try to send a really large message
if ( cbData > k_cbMaxUnreliableMsgSize && !( pSendMessage->m_nFlags & k_nSteamNetworkingSend_Reliable ) )
{
SpewWarningRateLimited( usecNow, "Trying to send a very large (%d bytes) unreliable message. Sending as reliable instead.\n", cbData );
pSendMessage->m_nFlags |= k_nSteamNetworkingSend_Reliable;
}
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_NoDelay )
{
// FIXME - need to check how much data is currently pending, and return
// k_EResultIgnored if we think it's going to be a while before this
// packet goes on the wire.
}
// First, accumulate tokens, and also limit to reasonable burst
// if we weren't already waiting to send
SNP_ClampSendRate();
SNP_TokenBucket_Accumulate( usecNow );
// Assign a message number
pSendMessage->m_nMessageNumber = ++m_senderState.m_nLastSentMsgNum;
// Reliable, or unreliable?
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_Reliable )
{
pSendMessage->SNPSend_SetReliableStreamPos( m_senderState.m_nReliableStreamPos );
// Generate the header
byte *hdr = pSendMessage->SNPSend_ReliableHeader();
hdr[0] = 0;
byte *hdrEnd = hdr+1;
int64 nMsgNumGap = pSendMessage->m_nMessageNumber - m_senderState.m_nLastSendMsgNumReliable;
Assert( nMsgNumGap >= 1 );
if ( nMsgNumGap > 1 )
{
hdrEnd = SerializeVarInt( hdrEnd, (uint64)nMsgNumGap );
hdr[0] |= 0x40;
}
if ( cbData < 0x20 )
{
hdr[0] |= (byte)cbData;
}
else
{
hdr[0] |= (byte)( 0x20 | ( cbData & 0x1f ) );
hdrEnd = SerializeVarInt( hdrEnd, cbData>>5U );
}
pSendMessage->m_cbSNPSendReliableHeader = hdrEnd - hdr;
// Grow the total size of the message by the header
pSendMessage->m_cbSize += pSendMessage->m_cbSNPSendReliableHeader;
// Advance stream pointer
m_senderState.m_nReliableStreamPos += pSendMessage->m_cbSize;
// Update stats
++m_senderState.m_nMessagesSentReliable;
m_senderState.m_cbPendingReliable += pSendMessage->m_cbSize;
// Remember last sent reliable message number, so we can know how to
// encode the next one
m_senderState.m_nLastSendMsgNumReliable = pSendMessage->m_nMessageNumber;
Assert( pSendMessage->SNPSend_IsReliable() );
}
else
{
pSendMessage->SNPSend_SetReliableStreamPos( 0 );
pSendMessage->m_cbSNPSendReliableHeader = 0;
++m_senderState.m_nMessagesSentUnreliable;
m_senderState.m_cbPendingUnreliable += pSendMessage->m_cbSize;
Assert( !pSendMessage->SNPSend_IsReliable() );
}
// Add to pending list
m_senderState.m_messagesQueued.push_back( pSendMessage );
SpewVerboseGroup( m_connectionConfig.m_LogLevel_Message.Get(), "[%s] SendMessage %s: MsgNum=%lld sz=%d\n",
GetDescription(),
pSendMessage->SNPSend_IsReliable() ? "RELIABLE" : "UNRELIABLE",
(long long)pSendMessage->m_nMessageNumber,
pSendMessage->m_cbSize );
// Use Nagle?
// We always set the Nagle timer, even if we immediately clear it. This makes our clearing code simpler,
// since we can always safely assume that once we find a message with the nagle timer cleared, all messages
// queued earlier than this also have it cleared.
// FIXME - Don't think this works if the configuration value is changing. Since changing the
// config value could violate the assumption that nagle times are increasing. Probably not worth
// fixing.
pSendMessage->SNPSend_SetUsecNagle( usecNow + m_connectionConfig.m_NagleTime.Get() );
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_NoNagle )
m_senderState.ClearNagleTimers();
// Save the message number. The code below might end up deleting the message we just queued
int64 result = pSendMessage->m_nMessageNumber;
// Schedule wakeup at the appropriate time. (E.g. right now, if we're ready to send,
// or at the Nagle time, if Nagle is active.)
//
// NOTE: Right now we might not actually be capable of sending end to end data.
// But that case is relatively rare, and nothing will break if we try to right now.
// On the other hand, just asking the question involved a virtual function call,
// and it will return success most of the time, so let's not make the check here.
if ( GetState() == k_ESteamNetworkingConnectionState_Connected )
{
SteamNetworkingMicroseconds usecNextThink = SNP_GetNextThinkTime( usecNow );
// Ready to send now?
if ( usecNextThink > usecNow )
{
// We are rate limiting. Spew about it?
if ( m_senderState.m_messagesQueued.m_pFirst->SNPSend_UsecNagle() == 0 )
{
SpewVerbose( "[%s] RATELIM QueueTime is %.1fms, SendRate=%.1fk, BytesQueued=%d\n",
GetDescription(),
m_senderState.CalcTimeUntilNextSend() * 1e-3,
m_senderState.m_n_x * ( 1.0/1024.0),
m_senderState.PendingBytesTotal()
);
}
// Set a wakeup call.
EnsureMinThinkTime( usecNextThink );
}
else
{
// We're ready to send right now. Check if we should!
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_UseCurrentThread )
{
// We should send in this thread, before the API entry point
// that the app used returns. Is the caller gonna handle this?
if ( pbThinkImmediately )
{
// Caller says they will handle it
*pbThinkImmediately = true;
}
else
{
// Caller wants us to just do it here.
CheckConnectionStateAndSetNextThinkTime( usecNow );
}
}
else
{
// Wake up the service thread ASAP to send this in the background thread
SetNextThinkTimeASAP();
}
}
}
return result;
} | 584 | True | 1 |
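
The row above documents CVE-2020-6016, where SNP_ReceiveUnreliableSegment() (not shown here) accepted segments whose offset could be negative; the associated commit message is "Drop unreliable segments with weird offset/size." The function printed above is the send side, so as a receive-side illustration only, here is a hedged sketch of the kind of offset/size validation that commit message implies. The size limit is an assumption for illustration, not GameNetworkingSockets' actual constant.

#include <cstdint>
#include <vector>

// Hypothetical guard for an unreliable-segment reassembly buffer.
static constexpr int64_t kAssumedMaxUnreliableMsg = 16 * 1024;

bool AcceptUnreliableSegment(int64_t nOffset, int cbSegment, std::vector<uint8_t> &buf)
{
    if (nOffset < 0 || cbSegment < 0)
        return false;                                    // negative values can never be valid
    if (nOffset > kAssumedMaxUnreliableMsg || cbSegment > kAssumedMaxUnreliableMsg)
        return false;                                    // reject absurd values before doing math
    if (nOffset + cbSegment > kAssumedMaxUnreliableMsg)  // sum cannot overflow: both operands are small
        return false;
    if (buf.size() < size_t(nOffset + cbSegment))
        buf.resize(size_t(nOffset + cbSegment));         // growth is bounded by the limit above
    // The caller may now safely copy cbSegment bytes to &buf[nOffset].
    return true;
}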
|
CVE-2020-6017 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/e0c86dcb9139771db3db0cfdb1fb8bef0af19c43', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles long unreliable segments in function SNP_ReceiveUnreliableSegment() when configured to support plain-text messages, leading to a Heap-Based Buffer Overflow and resulting in a memory corruption and possibly even a remote code execution."}] | 2022-04-12T16:19Z | 2020-12-03T14:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 11:01:54-07:00 | Drop unreliable segments with weird offset/size.
And be more deliberate about limits of unreliable message/segment sizes. | e0c86dcb9139771db3db0cfdb1fb8bef0af19c43 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::SNP_SendMessage | SteamNetworkingSocketsLib::CSteamNetworkConnectionBase::SNP_SendMessage( CSteamNetworkingMessage * pSendMessage , SteamNetworkingMicroseconds usecNow , bool * pbThinkImmediately) | ['pSendMessage', 'usecNow', 'pbThinkImmediately'] | int64 CSteamNetworkConnectionBase::SNP_SendMessage( CSteamNetworkingMessage *pSendMessage, SteamNetworkingMicroseconds usecNow, bool *pbThinkImmediately )
{
int cbData = (int)pSendMessage->m_cbSize;
// Assume we won't want to wake up immediately
if ( pbThinkImmediately )
*pbThinkImmediately = false;
// Check if we're full
if ( m_senderState.PendingBytesTotal() + cbData > m_connectionConfig.m_SendBufferSize.Get() )
{
SpewWarningRateLimited( usecNow, "Connection already has %u bytes pending, cannot queue any more messages\n", m_senderState.PendingBytesTotal() );
pSendMessage->Release();
return -k_EResultLimitExceeded;
}
// Check if they try to send a really large message
if ( cbData > k_cbMaxUnreliableMsgSize && !( pSendMessage->m_nFlags & k_nSteamNetworkingSend_Reliable ) )
{
SpewWarningRateLimited( usecNow, "Trying to send a very large (%d bytes) unreliable message. Sending as reliable instead.\n", cbData );
pSendMessage->m_nFlags |= k_nSteamNetworkingSend_Reliable;
}
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_NoDelay )
{
// FIXME - need to check how much data is currently pending, and return
// k_EResultIgnored if we think it's going to be a while before this
// packet goes on the wire.
}
// First, accumulate tokens, and also limit to reasonable burst
// if we weren't already waiting to send
SNP_ClampSendRate();
SNP_TokenBucket_Accumulate( usecNow );
// Assign a message number
pSendMessage->m_nMessageNumber = ++m_senderState.m_nLastSentMsgNum;
// Reliable, or unreliable?
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_Reliable )
{
pSendMessage->SNPSend_SetReliableStreamPos( m_senderState.m_nReliableStreamPos );
// Generate the header
byte *hdr = pSendMessage->SNPSend_ReliableHeader();
hdr[0] = 0;
byte *hdrEnd = hdr+1;
int64 nMsgNumGap = pSendMessage->m_nMessageNumber - m_senderState.m_nLastSendMsgNumReliable;
Assert( nMsgNumGap >= 1 );
if ( nMsgNumGap > 1 )
{
hdrEnd = SerializeVarInt( hdrEnd, (uint64)nMsgNumGap );
hdr[0] |= 0x40;
}
if ( cbData < 0x20 )
{
hdr[0] |= (byte)cbData;
}
else
{
hdr[0] |= (byte)( 0x20 | ( cbData & 0x1f ) );
hdrEnd = SerializeVarInt( hdrEnd, cbData>>5U );
}
pSendMessage->m_cbSNPSendReliableHeader = hdrEnd - hdr;
// Grow the total size of the message by the header
pSendMessage->m_cbSize += pSendMessage->m_cbSNPSendReliableHeader;
// Advance stream pointer
m_senderState.m_nReliableStreamPos += pSendMessage->m_cbSize;
// Update stats
++m_senderState.m_nMessagesSentReliable;
m_senderState.m_cbPendingReliable += pSendMessage->m_cbSize;
// Remember last sent reliable message number, so we can know how to
// encode the next one
m_senderState.m_nLastSendMsgNumReliable = pSendMessage->m_nMessageNumber;
Assert( pSendMessage->SNPSend_IsReliable() );
}
else
{
pSendMessage->SNPSend_SetReliableStreamPos( 0 );
pSendMessage->m_cbSNPSendReliableHeader = 0;
++m_senderState.m_nMessagesSentUnreliable;
m_senderState.m_cbPendingUnreliable += pSendMessage->m_cbSize;
Assert( !pSendMessage->SNPSend_IsReliable() );
}
// Add to pending list
m_senderState.m_messagesQueued.push_back( pSendMessage );
SpewVerboseGroup( m_connectionConfig.m_LogLevel_Message.Get(), "[%s] SendMessage %s: MsgNum=%lld sz=%d\n",
GetDescription(),
pSendMessage->SNPSend_IsReliable() ? "RELIABLE" : "UNRELIABLE",
(long long)pSendMessage->m_nMessageNumber,
pSendMessage->m_cbSize );
// Use Nagle?
// We always set the Nagle timer, even if we immediately clear it. This makes our clearing code simpler,
// since we can always safely assume that once we find a message with the nagle timer cleared, all messages
// queued earlier than this also have it cleared.
// FIXME - Don't think this works if the configuration value is changing. Since changing the
// config value could violate the assumption that nagle times are increasing. Probably not worth
// fixing.
pSendMessage->SNPSend_SetUsecNagle( usecNow + m_connectionConfig.m_NagleTime.Get() );
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_NoNagle )
m_senderState.ClearNagleTimers();
// Save the message number. The code below might end up deleting the message we just queued
int64 result = pSendMessage->m_nMessageNumber;
// Schedule wakeup at the appropriate time. (E.g. right now, if we're ready to send,
// or at the Nagle time, if Nagle is active.)
//
// NOTE: Right now we might not actually be capable of sending end to end data.
// But that case is relatively rare, and nothing will break if we try to right now.
// On the other hand, just asking the question involved a virtual function call,
// and it will return success most of the time, so let's not make the check here.
if ( GetState() == k_ESteamNetworkingConnectionState_Connected )
{
SteamNetworkingMicroseconds usecNextThink = SNP_GetNextThinkTime( usecNow );
// Ready to send now?
if ( usecNextThink > usecNow )
{
// We are rate limiting. Spew about it?
if ( m_senderState.m_messagesQueued.m_pFirst->SNPSend_UsecNagle() == 0 )
{
SpewVerbose( "[%s] RATELIM QueueTime is %.1fms, SendRate=%.1fk, BytesQueued=%d\n",
GetDescription(),
m_senderState.CalcTimeUntilNextSend() * 1e-3,
m_senderState.m_n_x * ( 1.0/1024.0),
m_senderState.PendingBytesTotal()
);
}
// Set a wakeup call.
EnsureMinThinkTime( usecNextThink );
}
else
{
// We're ready to send right now. Check if we should!
if ( pSendMessage->m_nFlags & k_nSteamNetworkingSend_UseCurrentThread )
{
// We should send in this thread, before the API entry point
// that the app used returns. Is the caller gonna handle this?
if ( pbThinkImmediately )
{
// Caller says they will handle it
*pbThinkImmediately = true;
}
else
{
// Caller wants us to just do it here.
CheckConnectionStateAndSetNextThinkTime( usecNow );
}
}
else
{
// Wake up the service thread ASAP to send this in the background thread
SetNextThinkTimeASAP();
}
}
}
return result;
} | 584 | True | 1 |
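
In the reliable branch above, SNP_SendMessage() encodes the message size into the header as the low five bits of the lead byte, with bit 0x20 signalling that the remaining bits (cbData >> 5) follow as a varint. For clarity, here is a hedged decode-side sketch of just that recombination; it takes the varint's value as an already-decoded integer, since the library's SerializeVarInt/DeserializeVarInt wire format is not reproduced here, and range checking of the varint value is omitted. The 0x40 bit and the message-number-gap varint are handled separately and are not covered by this sketch.

#include <cstdint>

// Recovers cbData from the header layout written above:
//   cbData < 0x20  -> the lead byte's low 5 bits hold the whole size
//   otherwise      -> the lead byte has 0x20 set, its low 5 bits are cbData & 0x1f,
//                     and the varint carries cbData >> 5
int DecodeReliableHeaderSize(uint8_t leadByte, uint64_t varintValue)
{
    int cbData = leadByte & 0x1f;
    if (leadByte & 0x20)
        cbData |= int(varintValue << 5);
    return cbData;
}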
|
CVE-2020-6019 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:N/A:P | NETWORK | LOW | NONE | NONE | NONE | PARTIAL | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | NONE | HIGH | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/d944a10808891d202bb1d5e1998de6e0423af678', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/d944a10808891d202bb1d5e1998de6e0423af678', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'NVD-CWE-noinfo'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles inlined statistics messages in function CConnectionTransportUDPBase::Received_Data(), leading to an exception thrown from libprotobuf and resulting in a crash."}] | 2020-12-10T23:15Z | 2020-11-13T16:15Z | Insufficient Information | There is insufficient information about the issue to classify it; details are unkown or unspecified. | Insufficient Information | https://nvd.nist.gov/vuln/categories | 0 | Fletcher Dunn | 2020-09-03 11:24:25-07:00 | Tweak pointer math to avoid possible integer overflow | d944a10808891d202bb1d5e1998de6e0423af678 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | SteamNetworkingSocketsLib::CConnectionTransportUDPBase::Received_Data | SteamNetworkingSocketsLib::CConnectionTransportUDPBase::Received_Data( const uint8 * pPkt , int cbPkt , SteamNetworkingMicroseconds usecNow) | ['pPkt', 'cbPkt', 'usecNow'] | void CConnectionTransportUDPBase::Received_Data( const uint8 *pPkt, int cbPkt, SteamNetworkingMicroseconds usecNow )
{
if ( cbPkt < sizeof(UDPDataMsgHdr) )
{
ReportBadUDPPacketFromConnectionPeer( "DataPacket", "Packet of size %d is too small.", cbPkt );
return;
}
// Check cookie
const UDPDataMsgHdr *hdr = (const UDPDataMsgHdr *)pPkt;
if ( LittleDWord( hdr->m_unToConnectionID ) != ConnectionIDLocal() )
{
// Wrong session. It could be an old session, or it could be spoofed.
ReportBadUDPPacketFromConnectionPeer( "DataPacket", "Incorrect connection ID" );
if ( BCheckGlobalSpamReplyRateLimit( usecNow ) )
{
SendNoConnection( LittleDWord( hdr->m_unToConnectionID ), 0 );
}
return;
}
uint16 nWirePktNumber = LittleWord( hdr->m_unSeqNum );
// Check state
switch ( ConnectionState() )
{
case k_ESteamNetworkingConnectionState_Dead:
case k_ESteamNetworkingConnectionState_None:
default:
Assert( false );
return;
case k_ESteamNetworkingConnectionState_ClosedByPeer:
case k_ESteamNetworkingConnectionState_FinWait:
case k_ESteamNetworkingConnectionState_ProblemDetectedLocally:
SendConnectionClosedOrNoConnection();
return;
case k_ESteamNetworkingConnectionState_Connecting:
// Ignore it. We don't have the SteamID of whoever is on the other end yet,
// their encryption keys, etc. The most likely cause is that a server sent
// a ConnectOK, which dropped. So they think we're connected but we don't
// have everything yet.
return;
case k_ESteamNetworkingConnectionState_Linger:
case k_ESteamNetworkingConnectionState_Connected:
case k_ESteamNetworkingConnectionState_FindingRoute: // not used for raw UDP, but might be used for derived class
// We'll process the chunk
break;
}
const uint8 *pIn = pPkt + sizeof(*hdr);
const uint8 *pPktEnd = pPkt + cbPkt;
// Inline stats?
static CMsgSteamSockets_UDP_Stats msgStats;
CMsgSteamSockets_UDP_Stats *pMsgStatsIn = nullptr;
uint32 cbStatsMsgIn = 0;
if ( hdr->m_unMsgFlags & hdr->kFlag_ProtobufBlob )
{
//Msg_Verbose( "Received inline stats from %s", server.m_szName );
pIn = DeserializeVarInt( pIn, pPktEnd, cbStatsMsgIn );
if ( pIn == NULL )
{
ReportBadUDPPacketFromConnectionPeer( "DataPacket", "Failed to varint decode size of stats blob" );
return;
}
if ( pIn + cbStatsMsgIn > pPktEnd )
{
ReportBadUDPPacketFromConnectionPeer( "DataPacket", "stats message size doesn't make sense. Stats message size %d, packet size %d", cbStatsMsgIn, cbPkt );
return;
}
if ( !msgStats.ParseFromArray( pIn, cbStatsMsgIn ) )
{
ReportBadUDPPacketFromConnectionPeer( "DataPacket", "protobuf failed to parse inline stats message" );
return;
}
// Shove sequence number so we know what acks to pend, etc
pMsgStatsIn = &msgStats;
// Advance pointer
pIn += cbStatsMsgIn;
}
const void *pChunk = pIn;
int cbChunk = pPktEnd - pIn;
// Decrypt it, and check packet number
UDPRecvPacketContext_t ctx;
ctx.m_usecNow = usecNow;
ctx.m_pTransport = this;
ctx.m_pStatsIn = pMsgStatsIn;
if ( !m_connection.DecryptDataChunk( nWirePktNumber, cbPkt, pChunk, cbChunk, ctx ) )
return;
// This is a valid packet. P2P connections might want to make a note of this
RecvValidUDPDataPacket( ctx );
// Process plaintext
int usecTimeSinceLast = 0; // FIXME - should we plumb this through so we can measure jitter?
if ( !m_connection.ProcessPlainTextDataChunk( usecTimeSinceLast, ctx ) )
return;
// Process the stats, if any
if ( pMsgStatsIn )
RecvStats( *pMsgStatsIn, usecNow );
} | 383 | True | 1 |
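
The inline-stats parsing above guards the protobuf blob with "if ( pIn + cbStatsMsgIn > pPktEnd )", and the commit message for this row ("Tweak pointer math to avoid possible integer overflow") points at exactly that shape of check: adding an attacker-controlled length to a pointer can wrap before the comparison runs. The following is a hedged sketch of the overflow-safe formulation, comparing the length against the bytes remaining instead; the helper name is illustrative only.

#include <cstddef>
#include <cstdint>

// Safe variant of "pIn + len > pPktEnd". Forming the sum first is undefined
// behaviour once it leaves the buffer and can wrap with a huge 'len'; the
// subtraction below stays inside the buffer (pIn <= pPktEnd is a precondition).
static bool BlobFitsInPacket(const uint8_t *pIn, const uint8_t *pPktEnd, uint32_t len)
{
    return len <= size_t(pPktEnd - pIn);
}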
|
CVE-2020-6018 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/bea84e2844b647532a9b7fbc3a6a8989d66e49e3', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/bea84e2844b647532a9b7fbc3a6a8989d66e49e3', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles long encrypted messages in function AES_GCM_DecryptContext::Decrypt() when compiled using libsodium, leading to a Stack-Based Buffer Overflow and resulting in a memory corruption and possibly even a remote code execution."}] | 2022-04-12T16:19Z | 2020-12-02T01:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 15:05:55-07:00 | Check if output buffer is too small.
It really seems like libsodium (whose entire purpose is to make crypto
idiot-proof) making me mess with these details is a flaw in the API design.
Also, correct Hungarian. | bea84e2844b647532a9b7fbc3a6a8989d66e49e3 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | AES_GCM_DecryptContext::Decrypt | AES_GCM_DecryptContext::Decrypt( const void * pEncryptedDataAndTag , size_t cbEncryptedDataAndTag , const void * pIV , void * pPlaintextData , uint32 * pcbPlaintextData , const void * pAdditionalAuthenticationData , size_t cbAuthenticationData) | ['pEncryptedDataAndTag', 'cbEncryptedDataAndTag', 'pIV', 'pPlaintextData', 'pcbPlaintextData', 'pAdditionalAuthenticationData', 'cbAuthenticationData'] | bool AES_GCM_DecryptContext::Decrypt(
const void *pEncryptedDataAndTag, size_t cbEncryptedDataAndTag,
const void *pIV,
void *pPlaintextData, uint32 *pcbPlaintextData,
const void *pAdditionalAuthenticationData, size_t cbAuthenticationData
) {
unsigned long long pcbPlaintextData_longlong;
const int nDecryptResult = crypto_aead_aes256gcm_decrypt_afternm(
static_cast<unsigned char*>( pPlaintextData ), &pcbPlaintextData_longlong,
nullptr,
static_cast<const unsigned char*>( pEncryptedDataAndTag ), cbEncryptedDataAndTag,
static_cast<const unsigned char*>( pAdditionalAuthenticationData ), cbAuthenticationData,
static_cast<const unsigned char*>( pIV ), static_cast<const crypto_aead_aes256gcm_state*>( m_ctx )
);
*pcbPlaintextData = pcbPlaintextData_longlong;
return nDecryptResult == 0;
} | 119 | True | 1 |
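
Decrypt() above forwards pPlaintextData to libsodium without ever consulting the caller's *pcbPlaintextData capacity, which is what the commit message "Check if output buffer is too small" addresses and what the CVE-2020-6018 description calls a stack-based buffer overflow. The following is only a hedged sketch of such a guard built on libsodium's public API (crypto_aead_aes256gcm_ABYTES is the 16-byte GCM tag), not the project's actual patch, and the wrapper name and parameter order are illustrative.

#include <sodium.h>
#include <cstdint>

// Decrypts only when the output fits. The plaintext produced by AES-GCM is the
// ciphertext length minus the authentication tag.
bool DecryptChecked(const crypto_aead_aes256gcm_state *pCtx,
                    const unsigned char *pCipherAndTag, size_t cbCipherAndTag,
                    const unsigned char *pIV,
                    const unsigned char *pAAD, size_t cbAAD,
                    unsigned char *pPlainOut, uint32_t *pcbPlainOut)
{
    if (cbCipherAndTag < crypto_aead_aes256gcm_ABYTES)
        return false;                                          // cannot even contain the tag
    if (cbCipherAndTag - crypto_aead_aes256gcm_ABYTES > *pcbPlainOut)
        return false;                                          // caller's buffer is too small
    unsigned long long cbPlain = 0;
    if (crypto_aead_aes256gcm_decrypt_afternm(pPlainOut, &cbPlain, nullptr,
            pCipherAndTag, cbCipherAndTag, pAAD, cbAAD, pIV, pCtx) != 0)
        return false;                                          // tag verification failed
    *pcbPlainOut = uint32_t(cbPlain);
    return true;
}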
|
CVE-2020-6018 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:P | NETWORK | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | HIGH | 9.8 | CRITICAL | 3.9 | 5.9 | False | [{'url': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/bea84e2844b647532a9b7fbc3a6a8989d66e49e3', 'name': 'https://github.com/ValveSoftware/GameNetworkingSockets/commit/bea84e2844b647532a9b7fbc3a6a8989d66e49e3', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'name': 'https://research.checkpoint.com/2020/game-on-finding-vulnerabilities-in-valves-steam-sockets/', 'refsource': 'MISC', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | HIGH | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:valvesoftware:game_networking_sockets:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.2.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "Valve's Game Networking Sockets prior to version v1.2.0 improperly handles long encrypted messages in function AES_GCM_DecryptContext::Decrypt() when compiled using libsodium, leading to a Stack-Based Buffer Overflow and resulting in a memory corruption and possibly even a remote code execution."}] | 2022-04-12T16:19Z | 2020-12-02T01:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Fletcher Dunn | 2020-09-03 15:05:55-07:00 | Check if output buffer is too small.
It really seems like libsodium (whose entire purpose is to make crypto
idiot-proof) making me mess with these details is a flaw in the API design.
Also, correct Hungarian. | bea84e2844b647532a9b7fbc3a6a8989d66e49e3 | False | ValveSoftware/GameNetworkingSockets | Reliable & unreliable messages over UDP. Robust message fragmentation & reassembly. P2P networking / NAT traversal. Encryption. | 2018-03-21 18:43:20 | 2022-08-12 17:06:45 | ValveSoftware | 6182.0 | 458.0 | AES_GCM_EncryptContext::Encrypt | AES_GCM_EncryptContext::Encrypt( const void * pPlaintextData , size_t cbPlaintextData , const void * pIV , void * pEncryptedDataAndTag , uint32 * pcbEncryptedDataAndTag , const void * pAdditionalAuthenticationData , size_t cbAuthenticationData) | ['pPlaintextData', 'cbPlaintextData', 'pIV', 'pEncryptedDataAndTag', 'pcbEncryptedDataAndTag', 'pAdditionalAuthenticationData', 'cbAuthenticationData'] | bool AES_GCM_EncryptContext::Encrypt(
const void *pPlaintextData, size_t cbPlaintextData,
const void *pIV,
void *pEncryptedDataAndTag, uint32 *pcbEncryptedDataAndTag,
const void *pAdditionalAuthenticationData, size_t cbAuthenticationData
) {
unsigned long long pcbEncryptedDataAndTag_longlong = *pcbEncryptedDataAndTag;
crypto_aead_aes256gcm_encrypt_afternm(
static_cast<unsigned char*>( pEncryptedDataAndTag ), &pcbEncryptedDataAndTag_longlong,
static_cast<const unsigned char*>( pPlaintextData ), cbPlaintextData,
static_cast<const unsigned char*>(pAdditionalAuthenticationData), cbAuthenticationData,
nullptr,
static_cast<const unsigned char*>( pIV ),
static_cast<const crypto_aead_aes256gcm_state*>( m_ctx )
);
*pcbEncryptedDataAndTag = pcbEncryptedDataAndTag_longlong;
return true;
} | 116 | True | 1 |
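
Encrypt() above has the mirror-image gap: it reads the incoming *pcbEncryptedDataAndTag (presumably the caller's capacity) into a local, but never compares it with the bytes libsodium will actually write, namely the plaintext length plus the GCM tag. Here is a short hedged sketch of that pre-flight check, illustrative only and under the same assumptions as the decrypt sketch above.

#include <sodium.h>
#include <cstddef>

// True when a ciphertext-plus-tag for the given plaintext will fit in the
// caller-provided output buffer.
bool EncryptedOutputFits(size_t cbPlaintext, size_t cbOutputCapacity)
{
    return cbPlaintext <= cbOutputCapacity &&
           cbOutputCapacity - cbPlaintext >= crypto_aead_aes256gcm_ABYTES;
}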
|
CVE-2020-7041 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'name': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/issues/536', 'name': 'https://github.com/adrienverge/openfortivpn/issues/536', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00009.html', 'name': 'openSUSE-SU-2020:0301', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00011.html', 'name': 'openSUSE-SU-2020:0305', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/SRVVNXCNTNMPCIAZIVR4FAGYCSU53FNA/', 'name': 'FEDORA-2020-42eb8821db', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/FF6HYIBREQGATRM5COF57MRQWKOKCWZ3/', 'name': 'FEDORA-2020-c96ab3c813', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/CKNKSGBVYGRRVRLFEFBEKUEJYJR5LWOF/', 'name': 'FEDORA-2020-dcdffcc368', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/commit/60660e00b80bad0fadcf39aee86f6f8756c94f91', 'name': 'https://github.com/adrienverge/openfortivpn/commit/60660e00b80bad0fadcf39aee86f6f8756c94f91', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-295'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:openfortivpn_project:openfortivpn:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.12.0', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*', 'versionEndIncluding': '1.0.2', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:opensuse:backports_sle:15.0:sp1:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An issue was discovered in openfortivpn 1.11.0 when used with OpenSSL 1.0.2 or later. 
tunnel.c mishandles certificate validation because an X509_check_host negative error code is interpreted as a successful return value.'}] | 2020-10-09T15:08Z | 2020-02-27T18:15Z | Improper Certificate Validation | The software does not validate, or incorrectly validates, a certificate. | When a certificate is invalid or malicious, it might allow an attacker to spoof a trusted entity by interfering in the communication path between the host and client. The software might connect to a malicious host while believing it is a trusted host, or the software might be deceived into accepting spoofed data that appears to originate from a trusted host.
| https://cwe.mitre.org/data/definitions/295.html | 0 | Martin Hecht | 2020-02-21 20:53:04+01:00 | correctly check return value of X509_check_host
CVE-2020-7041 incorrect use of X509_check_host (regarding return value)
is fixed with this commit.
The flaw came in with #242 and prevented proper host name verification
when openssl >= 1.0.2 was in use since openfortivpn 1.7.0. | 60660e00b80bad0fadcf39aee86f6f8756c94f91 | False | adrienverge/openfortivpn | Client for PPP+SSL VPN tunnel services | 2015-01-26 17:25:00 | 2022-08-27 09:41:32 | adrienverge | 1856.0 | 270.0 | ssl_verify_cert | ssl_verify_cert( struct tunnel * tunnel) | ['tunnel'] | static int ssl_verify_cert(struct tunnel *tunnel)
{
int ret = -1;
int cert_valid = 0;
unsigned char digest[SHA256LEN];
unsigned int len;
struct x509_digest *elem;
char digest_str[SHA256STRLEN], *subject, *issuer;
char *line;
int i;
X509_NAME *subj;
char common_name[FIELD_SIZE + 1];
SSL_set_verify(tunnel->ssl_handle, SSL_VERIFY_PEER, NULL);
X509 *cert = SSL_get_peer_certificate(tunnel->ssl_handle);
if (cert == NULL) {
log_error("Unable to get gateway certificate.\n");
return 1;
}
subj = X509_get_subject_name(cert);
#ifdef HAVE_X509_CHECK_HOST
// Use OpenSSL native host validation if v >= 1.0.2.
if (X509_check_host(cert, common_name, FIELD_SIZE, 0, NULL))
cert_valid = 1;
#else
// Use explicit Common Name check if native validation not available.
// Note: this will ignore Subject Alternative Name fields.
if (subj
&& X509_NAME_get_text_by_NID(subj, NID_commonName, common_name,
FIELD_SIZE) > 0
&& strncasecmp(common_name, tunnel->config->gateway_host,
FIELD_SIZE) == 0)
cert_valid = 1;
#endif
// Try to validate certificate using local PKI
if (cert_valid
&& SSL_get_verify_result(tunnel->ssl_handle) == X509_V_OK) {
log_debug("Gateway certificate validation succeeded.\n");
ret = 0;
goto free_cert;
}
log_debug("Gateway certificate validation failed.\n");
// If validation failed, check if cert is in the white list
if (X509_digest(cert, EVP_sha256(), digest, &len) <= 0
|| len != SHA256LEN) {
log_error("Could not compute certificate sha256 digest.\n");
goto free_cert;
}
// Encode digest in base16
for (i = 0; i < SHA256LEN; i++)
sprintf(&digest_str[2 * i], "%02x", digest[i]);
digest_str[SHA256STRLEN - 1] = '\0';
// Is it in whitelist?
for (elem = tunnel->config->cert_whitelist; elem != NULL;
elem = elem->next)
if (memcmp(digest_str, elem->data, SHA256STRLEN - 1) == 0)
break;
if (elem != NULL) { // break before end of loop
log_debug("Gateway certificate digest found in white list.\n");
ret = 0;
goto free_cert;
}
subject = X509_NAME_oneline(subj, NULL, 0);
issuer = X509_NAME_oneline(X509_get_issuer_name(cert), NULL, 0);
log_error("Gateway certificate validation failed, and the certificate digest in not in the local whitelist. If you trust it, rerun with:\n");
log_error(" --trusted-cert %s\n", digest_str);
log_error("or add this line to your config file:\n");
log_error(" trusted-cert = %s\n", digest_str);
log_error("Gateway certificate:\n");
log_error(" subject:\n");
for (line = strtok(subject, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" issuer:\n");
for (line = strtok(issuer, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" sha256 digest:\n");
log_error(" %s\n", digest_str);
free_cert:
X509_free(cert);
return ret;
} | 478 | True | 1 |
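
In the HAVE_X509_CHECK_HOST branch above, any non-zero return from X509_check_host() is taken as a match, yet OpenSSL also returns -1 for internal errors and -2 for malformed input, which is precisely the CVE-2020-7041 flaw this row records (the next row's function already carries the "== 1" fix). Below is a hedged sketch isolating the strict check, with the expected hostname passed in explicitly rather than through the uninitialized common_name buffer; the helper name is illustrative only.

#include <openssl/x509.h>
#include <openssl/x509v3.h>
#include <cstring>

// Accepts the certificate only on a positive match. 0 (no match), -1 (internal
// error) and -2 (malformed input) are all treated as validation failure.
static bool HostnameMatchesCert(X509 *pCert, const char *pszExpectedHost)
{
    return X509_check_host(pCert, pszExpectedHost, strlen(pszExpectedHost), 0, nullptr) == 1;
}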
|
CVE-2020-7042 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'name': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/issues/536', 'name': 'https://github.com/adrienverge/openfortivpn/issues/536', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00009.html', 'name': 'openSUSE-SU-2020:0301', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00011.html', 'name': 'openSUSE-SU-2020:0305', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/SRVVNXCNTNMPCIAZIVR4FAGYCSU53FNA/', 'name': 'FEDORA-2020-42eb8821db', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/FF6HYIBREQGATRM5COF57MRQWKOKCWZ3/', 'name': 'FEDORA-2020-c96ab3c813', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/CKNKSGBVYGRRVRLFEFBEKUEJYJR5LWOF/', 'name': 'FEDORA-2020-dcdffcc368', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/commit/9eee997d599a89492281fc7ffdd79d88cd61afc3', 'name': 'https://github.com/adrienverge/openfortivpn/commit/9eee997d599a89492281fc7ffdd79d88cd61afc3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-295'}, {'lang': 'en', 'value': 'CWE-908'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:openfortivpn_project:openfortivpn:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.12.0', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*', 'versionEndIncluding': '1.0.2', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:opensuse:backports_sle:15.0:sp1:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An issue was discovered in openfortivpn 1.11.0 when used with OpenSSL 1.0.2 or later. tunnel.c mishandles certificate validation because the hostname check operates on uninitialized memory. 
The outcome is that a valid certificate is never accepted (only a malformed certificate may be accepted).'}] | 2021-07-21T11:39Z | 2020-02-27T18:15Z | Improper Certificate Validation | The software does not validate, or incorrectly validates, a certificate. | When a certificate is invalid or malicious, it might allow an attacker to spoof a trusted entity by interfering in the communication path between the host and client. The software might connect to a malicious host while believing it is a trusted host, or the software might be deceived into accepting spoofed data that appears to originate from a trusted host.
| https://cwe.mitre.org/data/definitions/295.html | 0 | Martin Hecht | 2020-02-21 20:58:11+01:00 | supply proper input buffer to X509_check_host
CVE-2020-7042 use of uninitialized memory in X509_check_host is fixed with
this commit
the uninitialized buffer common_name was passed as argument to X509_check_host
which prevented proper host name validation when openssl >= 1.0.2 was in use.
This came in with #282 which went into openfortivpn 1.7.1.
Unfortunately, this problem has stayed unnoticed because the return value
was not properly checked either (which is a separate issue, with CVE-2020-7041,
and which has been fixed by the previous commit) | 9eee997d599a89492281fc7ffdd79d88cd61afc3 | False | adrienverge/openfortivpn | Client for PPP+SSL VPN tunnel services | 2015-01-26 17:25:00 | 2022-08-27 09:41:32 | adrienverge | 1856.0 | 270.0 | ssl_verify_cert | ssl_verify_cert( struct tunnel * tunnel) | ['tunnel'] | static int ssl_verify_cert(struct tunnel *tunnel)
{
int ret = -1;
int cert_valid = 0;
unsigned char digest[SHA256LEN];
unsigned int len;
struct x509_digest *elem;
char digest_str[SHA256STRLEN], *subject, *issuer;
char *line;
int i;
X509_NAME *subj;
char common_name[FIELD_SIZE + 1];
SSL_set_verify(tunnel->ssl_handle, SSL_VERIFY_PEER, NULL);
X509 *cert = SSL_get_peer_certificate(tunnel->ssl_handle);
if (cert == NULL) {
log_error("Unable to get gateway certificate.\n");
return 1;
}
subj = X509_get_subject_name(cert);
#ifdef HAVE_X509_CHECK_HOST
// Use OpenSSL native host validation if v >= 1.0.2.
// correctly check return value of X509_check_host
if (X509_check_host(cert, common_name, FIELD_SIZE, 0, NULL) == 1)
cert_valid = 1;
#else
// Use explicit Common Name check if native validation not available.
// Note: this will ignore Subject Alternative Name fields.
if (subj
&& X509_NAME_get_text_by_NID(subj, NID_commonName, common_name,
FIELD_SIZE) > 0
&& strncasecmp(common_name, tunnel->config->gateway_host,
FIELD_SIZE) == 0)
cert_valid = 1;
#endif
// Try to validate certificate using local PKI
if (cert_valid
&& SSL_get_verify_result(tunnel->ssl_handle) == X509_V_OK) {
log_debug("Gateway certificate validation succeeded.\n");
ret = 0;
goto free_cert;
}
log_debug("Gateway certificate validation failed.\n");
// If validation failed, check if cert is in the white list
if (X509_digest(cert, EVP_sha256(), digest, &len) <= 0
|| len != SHA256LEN) {
log_error("Could not compute certificate sha256 digest.\n");
goto free_cert;
}
// Encode digest in base16
for (i = 0; i < SHA256LEN; i++)
sprintf(&digest_str[2 * i], "%02x", digest[i]);
digest_str[SHA256STRLEN - 1] = '\0';
// Is it in whitelist?
for (elem = tunnel->config->cert_whitelist; elem != NULL;
elem = elem->next)
if (memcmp(digest_str, elem->data, SHA256STRLEN - 1) == 0)
break;
if (elem != NULL) { // break before end of loop
log_debug("Gateway certificate digest found in white list.\n");
ret = 0;
goto free_cert;
}
subject = X509_NAME_oneline(subj, NULL, 0);
issuer = X509_NAME_oneline(X509_get_issuer_name(cert), NULL, 0);
log_error("Gateway certificate validation failed, and the certificate digest in not in the local whitelist. If you trust it, rerun with:\n");
log_error(" --trusted-cert %s\n", digest_str);
log_error("or add this line to your config file:\n");
log_error(" trusted-cert = %s\n", digest_str);
log_error("Gateway certificate:\n");
log_error(" subject:\n");
for (line = strtok(subject, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" issuer:\n");
for (line = strtok(issuer, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" sha256 digest:\n");
log_error(" %s\n", digest_str);
free_cert:
X509_free(cert);
return ret;
} | 480 | True | 1 |
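
The fix recorded above comes down to two things: pass the expected gateway host name, not an uninitialized buffer, to X509_check_host(), and treat only a return value of exactly 1 as a match. A minimal sketch of that corrected call pattern, using illustrative names (check_gateway_hostname, expected_host) rather than the verbatim openfortivpn patch:

```c
#include <openssl/x509v3.h>

/* Accept the certificate only when X509_check_host() reports an explicit
 * match: it returns 1 on match, 0 on mismatch, and a negative value on
 * internal or malformed-input errors. Passing 0 as the name length lets
 * OpenSSL use strlen() on expected_host. */
static int check_gateway_hostname(X509 *cert, const char *expected_host)
{
    return X509_check_host(cert, expected_host, 0, 0, NULL) == 1;
}
```

Comparing against exactly 1 rather than "non-zero" matters because the negative error codes would otherwise be treated as success, which is the separate return-value issue (CVE-2020-7041) mentioned in the commit message.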
|
CVE-2020-7042 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | LOW | NONE | 5.3 | MEDIUM | 3.9 | 1.4 | False | [{'url': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'name': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/issues/536', 'name': 'https://github.com/adrienverge/openfortivpn/issues/536', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00009.html', 'name': 'openSUSE-SU-2020:0301', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00011.html', 'name': 'openSUSE-SU-2020:0305', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/SRVVNXCNTNMPCIAZIVR4FAGYCSU53FNA/', 'name': 'FEDORA-2020-42eb8821db', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/FF6HYIBREQGATRM5COF57MRQWKOKCWZ3/', 'name': 'FEDORA-2020-c96ab3c813', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/CKNKSGBVYGRRVRLFEFBEKUEJYJR5LWOF/', 'name': 'FEDORA-2020-dcdffcc368', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/commit/9eee997d599a89492281fc7ffdd79d88cd61afc3', 'name': 'https://github.com/adrienverge/openfortivpn/commit/9eee997d599a89492281fc7ffdd79d88cd61afc3', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-295'}, {'lang': 'en', 'value': 'CWE-908'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:openfortivpn_project:openfortivpn:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.12.0', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*', 'versionEndIncluding': '1.0.2', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:opensuse:backports_sle:15.0:sp1:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An issue was discovered in openfortivpn 1.11.0 when used with OpenSSL 1.0.2 or later. tunnel.c mishandles certificate validation because the hostname check operates on uninitialized memory. 
The outcome is that a valid certificate is never accepted (only a malformed certificate may be accepted).'}] | 2021-07-21T11:39Z | 2020-02-27T18:15Z | Use of Uninitialized Resource | The software uses or accesses a resource that has not been initialized. | When a resource has not been properly initialized, the software may behave unexpectedly. This may lead to a crash or invalid memory access, but the consequences vary depending on the type of resource and how it is used within the software.
| https://cwe.mitre.org/data/definitions/908.html | 0 | Martin Hecht | 2020-02-21 20:58:11+01:00 | supply proper input buffer to X509_check_host
CVE-2020-7042 use of uninitialized memory in X509_check_host is fixed with
this commit
the uninitialized buffer common_name was passed as argument to X509_check_host
which prevented proper host name validation when openssl >= 1.0.2 was in use.
This came in with #282 which went into openfortivpn 1.7.1.
Unfortunately, this problem has stayed unnoticed because the return value
was not properly checked either (which is a separate issue, with CVE-2020-7041,
and which has been fixed by the previous commit) | 9eee997d599a89492281fc7ffdd79d88cd61afc3 | False | adrienverge/openfortivpn | Client for PPP+SSL VPN tunnel services | 2015-01-26 17:25:00 | 2022-08-27 09:41:32 | adrienverge | 1856.0 | 270.0 | ssl_verify_cert | ssl_verify_cert( struct tunnel * tunnel) | ['tunnel'] | static int ssl_verify_cert(struct tunnel *tunnel)
{
int ret = -1;
int cert_valid = 0;
unsigned char digest[SHA256LEN];
unsigned int len;
struct x509_digest *elem;
char digest_str[SHA256STRLEN], *subject, *issuer;
char *line;
int i;
X509_NAME *subj;
char common_name[FIELD_SIZE + 1];
SSL_set_verify(tunnel->ssl_handle, SSL_VERIFY_PEER, NULL);
X509 *cert = SSL_get_peer_certificate(tunnel->ssl_handle);
if (cert == NULL) {
log_error("Unable to get gateway certificate.\n");
return 1;
}
subj = X509_get_subject_name(cert);
#ifdef HAVE_X509_CHECK_HOST
// Use OpenSSL native host validation if v >= 1.0.2.
// correctly check return value of X509_check_host
if (X509_check_host(cert, common_name, FIELD_SIZE, 0, NULL) == 1)
cert_valid = 1;
#else
// Use explicit Common Name check if native validation not available.
// Note: this will ignore Subject Alternative Name fields.
if (subj
&& X509_NAME_get_text_by_NID(subj, NID_commonName, common_name,
FIELD_SIZE) > 0
&& strncasecmp(common_name, tunnel->config->gateway_host,
FIELD_SIZE) == 0)
cert_valid = 1;
#endif
// Try to validate certificate using local PKI
if (cert_valid
&& SSL_get_verify_result(tunnel->ssl_handle) == X509_V_OK) {
log_debug("Gateway certificate validation succeeded.\n");
ret = 0;
goto free_cert;
}
log_debug("Gateway certificate validation failed.\n");
// If validation failed, check if cert is in the white list
if (X509_digest(cert, EVP_sha256(), digest, &len) <= 0
|| len != SHA256LEN) {
log_error("Could not compute certificate sha256 digest.\n");
goto free_cert;
}
// Encode digest in base16
for (i = 0; i < SHA256LEN; i++)
sprintf(&digest_str[2 * i], "%02x", digest[i]);
digest_str[SHA256STRLEN - 1] = '\0';
// Is it in whitelist?
for (elem = tunnel->config->cert_whitelist; elem != NULL;
elem = elem->next)
if (memcmp(digest_str, elem->data, SHA256STRLEN - 1) == 0)
break;
if (elem != NULL) { // break before end of loop
log_debug("Gateway certificate digest found in white list.\n");
ret = 0;
goto free_cert;
}
subject = X509_NAME_oneline(subj, NULL, 0);
issuer = X509_NAME_oneline(X509_get_issuer_name(cert), NULL, 0);
log_error("Gateway certificate validation failed, and the certificate digest in not in the local whitelist. If you trust it, rerun with:\n");
log_error(" --trusted-cert %s\n", digest_str);
log_error("or add this line to your config file:\n");
log_error(" trusted-cert = %s\n", digest_str);
log_error("Gateway certificate:\n");
log_error(" subject:\n");
for (line = strtok(subject, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" issuer:\n");
for (line = strtok(issuer, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" sha256 digest:\n");
log_error(" %s\n", digest_str);
free_cert:
X509_free(cert);
return ret;
} | 480 | True | 1 |
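
The CWE-908 angle of the same record is that the stack buffer common_name had indeterminate contents when it reached the host check. For builds without X509_check_host(), the function's #else branch shows the intended pattern: the buffer only carries meaningful data after X509_NAME_get_text_by_NID() fills it. A hedged sketch of that fallback path with an explicit zero-initialization and return-value check (illustrative buffer size and names, not project code):

```c
#include <string.h>
#include <strings.h>
#include <openssl/x509.h>

/* common_name holds defined data only after X509_NAME_get_text_by_NID()
 * succeeds; zero-filling it up front keeps every code path initialized. */
static int common_name_matches(X509 *cert, const char *expected_host)
{
    char common_name[256 + 1] = { 0 };
    X509_NAME *subj = X509_get_subject_name(cert);

    if (subj == NULL)
        return 0;
    if (X509_NAME_get_text_by_NID(subj, NID_commonName, common_name,
                                  (int)(sizeof(common_name) - 1)) <= 0)
        return 0;   /* buffer was never filled: do not compare against it */

    /* A plain string comparison still trusts the CN text; embedded NUL
     * bytes are the subject of the CVE-2020-7043 record below. */
    return strcasecmp(common_name, expected_host) == 0;
}
```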
|
CVE-2020-7043 | False | False | False | False | AV:N/AC:L/Au:N/C:P/I:P/A:N | NETWORK | LOW | NONE | PARTIAL | PARTIAL | NONE | 6.4 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | HIGH | HIGH | NONE | 9.1 | CRITICAL | 3.9 | 5.2 | False | [{'url': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'name': 'https://github.com/adrienverge/openfortivpn/commit/cd9368c6a1b4ef91d77bb3fdbe2e5bc34aa6f4c4', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/issues/536', 'name': 'https://github.com/adrienverge/openfortivpn/issues/536', 'refsource': 'MISC', 'tags': ['Issue Tracking', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00009.html', 'name': 'openSUSE-SU-2020:0301', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'http://lists.opensuse.org/opensuse-security-announce/2020-03/msg00011.html', 'name': 'openSUSE-SU-2020:0305', 'refsource': 'SUSE', 'tags': ['Mailing List', 'Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/SRVVNXCNTNMPCIAZIVR4FAGYCSU53FNA/', 'name': 'FEDORA-2020-42eb8821db', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/FF6HYIBREQGATRM5COF57MRQWKOKCWZ3/', 'name': 'FEDORA-2020-c96ab3c813', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/CKNKSGBVYGRRVRLFEFBEKUEJYJR5LWOF/', 'name': 'FEDORA-2020-dcdffcc368', 'refsource': 'FEDORA', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/adrienverge/openfortivpn/commit/6328a070ddaab16faaf008cb9a8a62439c30f2a8', 'name': 'https://github.com/adrienverge/openfortivpn/commit/6328a070ddaab16faaf008cb9a8a62439c30f2a8', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-295'}]}] | MEDIUM | [{'operator': 'AND', 'children': [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:openfortivpn_project:openfortivpn:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.12.0', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': False, 'cpe23Uri': 'cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*', 'versionEndExcluding': '1.0.2', 'cpe_name': []}]}], 'cpe_match': []}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*', 'cpe_name': []}]}, {'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:o:opensuse:leap:15.1:*:*:*:*:*:*:*', 'cpe_name': []}, {'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:opensuse:backports_sle:15.0:sp1:*:*:*:*:*:*', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "An issue was discovered in openfortivpn 1.11.0 when used with OpenSSL before 1.0.2. 
tunnel.c mishandles certificate validation because hostname comparisons do not consider '\\0' characters, as demonstrated by a good.example.com\\x00evil.example.com attack."}] | 2020-10-09T14:58Z | 2020-02-27T18:15Z | Improper Certificate Validation | The software does not validate, or incorrectly validates, a certificate. | When a certificate is invalid or malicious, it might allow an attacker to spoof a trusted entity by interfering in the communication path between the host and client. The software might connect to a malicious host while believing it is a trusted host, or the software might be deceived into accepting spoofed data that appears to originate from a trusted host.
| https://cwe.mitre.org/data/definitions/295.html | 0 | Martin Hecht | 2020-02-21 21:37:06+01:00 | fix TLS Certificate CommonName NULL Byte Vulnerability
CVE-2020-7043 TLS Certificate CommonName NULL Byte Vulnerability is fixed
with this commit
with #8 hostname validation for the certificate was introduced
but unfortunately strncasecmp() was used to compare the byte array
against the expected hostname. This does not correctly treat a CN
which contains a NULL byte. In order to fix this vulnerability
the reference implementation from iSECPartners has been included
into the code. | 6328a070ddaab16faaf008cb9a8a62439c30f2a8 | False | adrienverge/openfortivpn | Client for PPP+SSL VPN tunnel services | 2015-01-26 17:25:00 | 2022-08-27 09:41:32 | adrienverge | 1856.0 | 270.0 | ssl_verify_cert | ssl_verify_cert( struct tunnel * tunnel) | ['tunnel'] | static int ssl_verify_cert(struct tunnel *tunnel)
{
int ret = -1;
int cert_valid = 0;
unsigned char digest[SHA256LEN];
unsigned int len;
struct x509_digest *elem;
char digest_str[SHA256STRLEN], *subject, *issuer;
char *line;
int i;
X509_NAME *subj;
SSL_set_verify(tunnel->ssl_handle, SSL_VERIFY_PEER, NULL);
X509 *cert = SSL_get_peer_certificate(tunnel->ssl_handle);
if (cert == NULL) {
log_error("Unable to get gateway certificate.\n");
return 1;
}
subj = X509_get_subject_name(cert);
#ifdef HAVE_X509_CHECK_HOST
// Use OpenSSL native host validation if v >= 1.0.2.
// compare against gateway_host and correctly check return value
// to fix piror Incorrect use of X509_check_host
if (X509_check_host(cert, tunnel->config->gateway_host,
0, 0, NULL) == 1)
cert_valid = 1;
#else
char common_name[FIELD_SIZE + 1];
// Use explicit Common Name check if native validation not available.
// Note: this will ignore Subject Alternative Name fields.
if (subj
&& X509_NAME_get_text_by_NID(subj, NID_commonName, common_name,
FIELD_SIZE) > 0
&& strncasecmp(common_name, tunnel->config->gateway_host,
FIELD_SIZE) == 0)
cert_valid = 1;
#endif
// Try to validate certificate using local PKI
if (cert_valid
&& SSL_get_verify_result(tunnel->ssl_handle) == X509_V_OK) {
log_debug("Gateway certificate validation succeeded.\n");
ret = 0;
goto free_cert;
}
log_debug("Gateway certificate validation failed.\n");
// If validation failed, check if cert is in the white list
if (X509_digest(cert, EVP_sha256(), digest, &len) <= 0
|| len != SHA256LEN) {
log_error("Could not compute certificate sha256 digest.\n");
goto free_cert;
}
// Encode digest in base16
for (i = 0; i < SHA256LEN; i++)
sprintf(&digest_str[2 * i], "%02x", digest[i]);
digest_str[SHA256STRLEN - 1] = '\0';
// Is it in whitelist?
for (elem = tunnel->config->cert_whitelist; elem != NULL;
elem = elem->next)
if (memcmp(digest_str, elem->data, SHA256STRLEN - 1) == 0)
break;
if (elem != NULL) { // break before end of loop
log_debug("Gateway certificate digest found in white list.\n");
ret = 0;
goto free_cert;
}
subject = X509_NAME_oneline(subj, NULL, 0);
issuer = X509_NAME_oneline(X509_get_issuer_name(cert), NULL, 0);
log_error("Gateway certificate validation failed, and the certificate digest in not in the local whitelist. If you trust it, rerun with:\n");
log_error(" --trusted-cert %s\n", digest_str);
log_error("or add this line to your config file:\n");
log_error(" trusted-cert = %s\n", digest_str);
log_error("Gateway certificate:\n");
log_error(" subject:\n");
for (line = strtok(subject, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" issuer:\n");
for (line = strtok(issuer, "/"); line != NULL;
line = strtok(NULL, "/"))
log_error(" %s\n", line);
log_error(" sha256 digest:\n");
log_error(" %s\n", digest_str);
free_cert:
X509_free(cert);
return ret;
} | 484 | True | 1 |
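
The commit above replaces a strncasecmp() comparison because C string functions stop at the first '\0', so a Common Name of good.example.com\0evil.example.com compares equal to good.example.com. One way to detect that trick, sketched generically here rather than copied from the iSECPartners code the commit imports, is to read the CN as an ASN1_STRING and compare its declared length with strlen() of its bytes (ASN1_STRING_get0_data() assumes OpenSSL 1.1.0 or later):

```c
#include <string.h>
#include <openssl/x509.h>

/* Returns non-zero when the subject CN is missing or contains an embedded
 * NUL byte, i.e. when its ASN.1 length disagrees with its C-string length. */
static int cn_missing_or_has_embedded_nul(X509 *cert)
{
    X509_NAME *subj = X509_get_subject_name(cert);
    int idx = X509_NAME_get_index_by_NID(subj, NID_commonName, -1);
    X509_NAME_ENTRY *entry;
    ASN1_STRING *cn;

    if (idx < 0)
        return 1;
    entry = X509_NAME_get_entry(subj, idx);
    if (entry == NULL || (cn = X509_NAME_ENTRY_get_data(entry)) == NULL)
        return 1;
    return (size_t)ASN1_STRING_length(cn) !=
           strlen((const char *)ASN1_STRING_get0_data(cn));
}
```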
|
CVE-2020-7670 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | HIGH | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'name': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/ohler55/agoo/issues/88', 'name': 'https://github.com/ohler55/agoo/issues/88', 'refsource': 'MISC', 'tags': []}, {'url': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'name': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-444'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:ohler:agoo:*:*:*:*:*:ruby:*:*', 'versionEndIncluding': '2.12.3', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'agoo prior to 2.14.0 allows request smuggling attacks where agoo is used as a backend and a frontend proxy also being vulnerable. HTTP pipelining issues and request smuggling attacks might be possible due to incorrect Content-Length and Transfer encoding header parsing. It is possible to conduct HTTP request smuggling attacks where `agoo` is used as part of a chain of backend servers due to insufficient `Content-Length` and `Transfer Encoding` parsing.'}] | 2020-11-17T00:15Z | 2020-06-10T16:15Z | Inconsistent Interpretation of HTTP Requests ('HTTP Request/Response Smuggling') | The product acts as an intermediary HTTP agent
(such as a proxy or firewall) in the data flow between two
entities such as a client and server, but it does not
interpret malformed HTTP requests or responses in ways that
are consistent with how the messages will be processed by
those entities that are at the ultimate destination. |
HTTP requests or responses ("messages") can be
malformed or unexpected in ways that cause web servers or
clients to interpret the messages in different ways than
intermediary HTTP agents such as load balancers, reverse
proxies, web caching proxies, application firewalls,
etc. For example, an adversary may be able to add duplicate
or different header fields that a client or server might
interpret as one set of messages, whereas the intermediary
might interpret the same sequence of bytes as a different
set of messages. For example, discrepancies can arise in
how to handle duplicate headers like two Transfer-encoding
(TE) or two Content-length (CL), or the malicious HTTP
message will have different headers for TE and
CL.
The inconsistent parsing and interpretation of messages
can allow the adversary to "smuggle" a message to the
client/server without the intermediary being aware of it.
This weakness is usually the result of the usage
of outdated or incompatible HTTP protocol versions in the
HTTP agents.
| https://cwe.mitre.org/data/definitions/444.html | 0 | Peter Ohler | 2020-11-07 19:07:47-05:00 | Remote addr (#99)
* REMOTE_ADDR added
* Ready for merge | 23d03535cf7b50d679a60a953a0cae9519a4a130 | False | ohler55/agoo | A High Performance HTTP Server for Ruby | 2017-12-22 03:51:02 | 2022-06-21 23:38:38 | ohler55 | 792.0 | 32.0 | add_header_value | add_header_value( VALUE hh , const char * key , int klen , const char * val , int vlen) | ['hh', 'key', 'klen', 'val', 'vlen'] | add_header_value(VALUE hh, const char *key, int klen, const char *val, int vlen) {
if (sizeof(content_type) - 1 == klen && 0 == strncasecmp(key, content_type, sizeof(content_type) - 1)) {
rb_hash_aset(hh, content_type_val, rb_str_new(val, vlen));
} else if (sizeof(content_length) - 1 == klen && 0 == strncasecmp(key, content_length, sizeof(content_length) - 1)) {
rb_hash_aset(hh, content_length_val, rb_str_new(val, vlen));
} else {
char hkey[1024];
char *k = hkey;
volatile VALUE sval = rb_str_new(val, vlen);
strcpy(hkey, "HTTP_");
k = hkey + 5;
if ((int)(sizeof(hkey) - 5) <= klen) {
klen = sizeof(hkey) - 6;
}
strncpy(k, key, klen);
hkey[klen + 5] = '\0';
//rb_hash_aset(hh, rb_str_new(hkey, klen + 5), sval);
// Contrary to the Rack spec, Rails expects all upper case keys so add those as well.
for (k = hkey + 5; '\0' != *k; k++) {
if ('-' == *k) {
*k = '_';
} else {
*k = toupper(*k);
}
}
rb_hash_aset(hh, rb_str_new(hkey, klen + 5), sval);
}
} | 254 | True | 1 |
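
The CWE-444 text in this record describes the Content-Length/Transfer-Encoding ambiguity that enables smuggling when front end and back end disagree on message framing. A conservative back-end mitigation, sketched generically here (the struct header array is a made-up representation, not agoo's parser), is to reject any request that presents both framing mechanisms or conflicting Content-Length values, in line with RFC 7230 section 3.3.3:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <strings.h>

struct header {
    const char *name;
    const char *value;
};

/* True when the request's framing is ambiguous: Transfer-Encoding together
 * with Content-Length, or two Content-Length headers that disagree. */
static bool request_framing_is_ambiguous(const struct header *h, size_t n)
{
    const char *content_length = NULL;
    bool has_transfer_encoding = false;
    size_t i;

    for (i = 0; i < n; i++) {
        if (strcasecmp(h[i].name, "Transfer-Encoding") == 0) {
            has_transfer_encoding = true;
        } else if (strcasecmp(h[i].name, "Content-Length") == 0) {
            if (content_length != NULL && strcmp(content_length, h[i].value) != 0)
                return true;
            content_length = h[i].value;
        }
    }
    return has_transfer_encoding && content_length != NULL;
}
```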
|
CVE-2020-7670 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | HIGH | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'name': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/ohler55/agoo/issues/88', 'name': 'https://github.com/ohler55/agoo/issues/88', 'refsource': 'MISC', 'tags': []}, {'url': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'name': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-444'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:ohler:agoo:*:*:*:*:*:ruby:*:*', 'versionEndIncluding': '2.12.3', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'agoo prior to 2.14.0 allows request smuggling attacks where agoo is used as a backend and a frontend proxy also being vulnerable. HTTP pipelining issues and request smuggling attacks might be possible due to incorrect Content-Length and Transfer encoding header parsing. It is possible to conduct HTTP request smuggling attacks where `agoo` is used as part of a chain of backend servers due to insufficient `Content-Length` and `Transfer Encoding` parsing.'}] | 2020-11-17T00:15Z | 2020-06-10T16:15Z | Inconsistent Interpretation of HTTP Requests ('HTTP Request/Response Smuggling') | The product acts as an intermediary HTTP agent
(such as a proxy or firewall) in the data flow between two
entities such as a client and server, but it does not
interpret malformed HTTP requests or responses in ways that
are consistent with how the messages will be processed by
those entities that are at the ultimate destination. |
HTTP requests or responses ("messages") can be
malformed or unexpected in ways that cause web servers or
clients to interpret the messages in different ways than
intermediary HTTP agents such as load balancers, reverse
proxies, web caching proxies, application firewalls,
etc. For example, an adversary may be able to add duplicate
or different header fields that a client or server might
interpret as one set of messages, whereas the intermediary
might interpret the same sequence of bytes as a different
set of messages. For example, discrepancies can arise in
how to handle duplicate headers like two Transfer-encoding
(TE) or two Content-length (CL), or the malicious HTTP
message will have different headers for TE and
CL.
The inconsistent parsing and interpretation of messages
can allow the adversary to "smuggle" a message to the
client/server without the intermediary being aware of it.
This weakness is usually the result of the usage
of outdated or incompatible HTTP protocol versions in the
HTTP agents.
| https://cwe.mitre.org/data/definitions/444.html | 0 | Peter Ohler | 2020-11-07 19:07:47-05:00 | Remote addr (#99)
* REMOTE_ADDR added
* Ready for merge | 23d03535cf7b50d679a60a953a0cae9519a4a130 | False | ohler55/agoo | A High Performance HTTP Server for Ruby | 2017-12-22 03:51:02 | 2022-06-21 23:38:38 | ohler55 | 792.0 | 32.0 | request_env | request_env( agooReq req , VALUE self) | ['req', 'self'] | request_env(agooReq req, VALUE self) {
if (Qnil == (VALUE)req->env) {
volatile VALUE env = rb_hash_new();
// As described by
// http://www.rubydoc.info/github/rack/rack/master/file/SPEC and
// https://github.com/rack/rack/blob/master/SPEC.
rb_hash_aset(env, request_method_val, req_method(req));
rb_hash_aset(env, script_name_val, req_script_name(req));
rb_hash_aset(env, path_info_val, req_path_info(req));
rb_hash_aset(env, query_string_val, req_query_string(req));
rb_hash_aset(env, server_name_val, req_server_name(req));
rb_hash_aset(env, server_port_val, req_server_port(req));
fill_headers(req, env);
rb_hash_aset(env, rack_version_val, rack_version_val_val);
rb_hash_aset(env, rack_url_scheme_val, req_rack_url_scheme(req));
rb_hash_aset(env, rack_input_val, req_rack_input(req));
rb_hash_aset(env, rack_errors_val, req_rack_errors(req));
rb_hash_aset(env, rack_multithread_val, req_rack_multithread(req));
rb_hash_aset(env, rack_multiprocess_val, Qfalse);
rb_hash_aset(env, rack_run_once_val, Qfalse);
rb_hash_aset(env, rack_logger_val, req_rack_logger(req));
rb_hash_aset(env, rack_upgrade_val, req_rack_upgrade(req));
rb_hash_aset(env, rack_hijackq_val, Qtrue);
// TBD should return IO on #call and set hijack_io on env object that
// has a call method that wraps the req->res->con->sock then set the
// sock to 0 or maybe con. mutex? env[rack.hijack_io] = IO.new(sock,
// "rw") - maybe it works.
//
// set a flag on con to indicate it has been hijacked
// then set sock to 0 in con loop and destroy con
rb_hash_aset(env, rack_hijack_val, self);
rb_hash_aset(env, rack_hijack_io_val, Qnil);
if (agoo_server.rack_early_hints) {
volatile VALUE eh = agoo_early_hints_new(req);
rb_hash_aset(env, early_hints_val, eh);
}
req->env = (void*)env;
}
return (VALUE)req->env;
} | 280 | True | 1 |
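
The commit message attached to this record ("REMOTE_ADDR added") and the request_env() function above suggest where such a value ends up: one more entry in the Rack env hash. A hypothetical helper using the public Ruby C API (rb_hash_aset, rb_str_new), not the exact agoo change, assuming the peer address string has already been obtained from the socket (see the sketch after the next record):

```c
#include <ruby.h>

/* Expose an already-formatted peer address string to the Rack application
 * as env["REMOTE_ADDR"]. */
static void env_set_remote_addr(VALUE env, const char *addr, long addr_len)
{
    rb_hash_aset(env, rb_str_new_cstr("REMOTE_ADDR"),
                 rb_str_new(addr, addr_len));
}
```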
|
CVE-2020-7670 | False | False | False | False | AV:N/AC:L/Au:N/C:N/I:P/A:N | NETWORK | LOW | NONE | NONE | PARTIAL | NONE | 5.0 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N | NETWORK | LOW | NONE | NONE | UNCHANGED | NONE | HIGH | NONE | 7.5 | HIGH | 3.9 | 3.6 | False | [{'url': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'name': 'https://snyk.io/vuln/SNYK-RUBY-AGOO-569137', 'refsource': 'MISC', 'tags': ['Third Party Advisory']}, {'url': 'https://github.com/ohler55/agoo/issues/88', 'name': 'https://github.com/ohler55/agoo/issues/88', 'refsource': 'MISC', 'tags': []}, {'url': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'name': 'https://github.com/ohler55/agoo/commit/23d03535cf7b50d679a60a953a0cae9519a4a130', 'refsource': 'MISC', 'tags': []}] | [{'description': [{'lang': 'en', 'value': 'CWE-444'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:ohler:agoo:*:*:*:*:*:ruby:*:*', 'versionEndIncluding': '2.12.3', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'agoo prior to 2.14.0 allows request smuggling attacks where agoo is used as a backend and a frontend proxy also being vulnerable. HTTP pipelining issues and request smuggling attacks might be possible due to incorrect Content-Length and Transfer encoding header parsing. It is possible to conduct HTTP request smuggling attacks where `agoo` is used as part of a chain of backend servers due to insufficient `Content-Length` and `Transfer Encoding` parsing.'}] | 2020-11-17T00:15Z | 2020-06-10T16:15Z | Inconsistent Interpretation of HTTP Requests ('HTTP Request/Response Smuggling') | The product acts as an intermediary HTTP agent
(such as a proxy or firewall) in the data flow between two
entities such as a client and server, but it does not
interpret malformed HTTP requests or responses in ways that
are consistent with how the messages will be processed by
those entities that are at the ultimate destination. |
HTTP requests or responses ("messages") can be
malformed or unexpected in ways that cause web servers or
clients to interpret the messages in different ways than
intermediary HTTP agents such as load balancers, reverse
proxies, web caching proxies, application firewalls,
etc. For example, an adversary may be able to add duplicate
or different header fields that a client or server might
interpret as one set of messages, whereas the intermediary
might interpret the same sequence of bytes as a different
set of messages. For example, discrepancies can arise in
how to handle duplicate headers like two Transfer-encoding
(TE) or two Content-length (CL), or the malicious HTTP
message will have different headers for TE and
CL.
The inconsistent parsing and interpretation of messages
can allow the adversary to "smuggle" a message to the
client/server without the intermediary being aware of it.
This weakness is usually the result of the usage
of outdated or incompatible HTTP protocol versions in the
HTTP agents.
| https://cwe.mitre.org/data/definitions/444.html | 0 | Peter Ohler | 2020-11-07 19:07:47-05:00 | Remote addr (#99)
* REMOTE_ADDR added
* Ready for merge | 23d03535cf7b50d679a60a953a0cae9519a4a130 | False | ohler55/agoo | A High Performance HTTP Server for Ruby | 2017-12-22 03:51:02 | 2022-06-21 23:38:38 | ohler55 | 792.0 | 32.0 | listen_loop | listen_loop( void * x) | ['x'] | listen_loop(void *x) {
int optval = 1;
struct pollfd pa[100];
struct pollfd *p;
struct _agooErr err = AGOO_ERR_INIT;
struct sockaddr_in client_addr;
int client_sock;
int pcnt = 0;
socklen_t alen = 0;
agooCon con;
int i;
uint64_t cnt = 0;
agooBind b;
for (b = agoo_server.binds, p = pa; NULL != b; b = b->next, p++, pcnt++) {
p->fd = b->fd;
p->events = POLLIN;
p->revents = 0;
}
memset(&client_addr, 0, sizeof(client_addr));
atomic_fetch_add(&agoo_server.running, 1);
while (agoo_server.active) {
if (0 > (i = poll(pa, pcnt, 200))) {
if (EAGAIN == errno) {
continue;
}
agoo_log_cat(&agoo_error_cat, "Server polling error. %s.", strerror(errno));
// Either a signal or something bad like out of memory. Might as well exit.
break;
}
if (0 == i) { // nothing to read
continue;
}
for (b = agoo_server.binds, p = pa; NULL != b; b = b->next, p++) {
if (0 != (p->revents & POLLIN)) {
if (0 > (client_sock = accept(p->fd, (struct sockaddr*)&client_addr, &alen))) {
agoo_log_cat(&agoo_error_cat, "Server with pid %d accept connection failed. %s.", getpid(), strerror(errno));
} else if (NULL == (con = agoo_con_create(&err, client_sock, ++cnt, b))) {
agoo_log_cat(&agoo_error_cat, "Server with pid %d accept connection failed. %s.", getpid(), err.msg);
close(client_sock);
cnt--;
agoo_err_clear(&err);
} else {
int con_cnt;
#ifdef OSX_OS
setsockopt(client_sock, SOL_SOCKET, SO_NOSIGPIPE, &optval, sizeof(optval));
#endif
#ifdef PLATFORM_LINUX
setsockopt(client_sock, IPPROTO_TCP, TCP_QUICKACK, &optval, sizeof(optval));
#endif
fcntl(client_sock, F_SETFL, O_NONBLOCK);
//fcntl(client_sock, F_SETFL, FNDELAY);
setsockopt(client_sock, SOL_SOCKET, SO_KEEPALIVE, &optval, sizeof(optval));
setsockopt(client_sock, IPPROTO_TCP, TCP_NODELAY, &optval, sizeof(optval));
agoo_log_cat(&agoo_con_cat, "Server with pid %d accepted connection %llu on %s [%d]",
getpid(), (unsigned long long)cnt, b->id, con->sock);
con_cnt = atomic_fetch_add(&agoo_server.con_cnt, 1);
if (agoo_server.loop_max > agoo_server.loop_cnt && agoo_server.loop_cnt * LOOP_UP < con_cnt) {
add_con_loop();
}
agoo_queue_push(&agoo_server.con_queue, (void*)con);
}
}
if (0 != (p->revents & (POLLERR | POLLHUP | POLLNVAL))) {
if (0 != (p->revents & (POLLHUP | POLLNVAL))) {
agoo_log_cat(&agoo_error_cat, "Agoo server with pid %d socket on %s closed.", getpid(), b->id);
} else {
agoo_log_cat(&agoo_error_cat, "Agoo server with pid %d socket on %s error.", getpid(), b->id);
}
agoo_server.active = false;
}
p->revents = 0;
}
}
for (b = agoo_server.binds; NULL != b; b = b->next) {
agoo_bind_close(b);
}
atomic_fetch_sub(&agoo_server.running, 1);
return NULL;
} | 620 | True | 1 |
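
The accept loop above is where the peer address becomes available. A generic, hedged sketch of recovering it as a printable string with getpeername() and inet_ntop() (standard POSIX calls, not agoo's implementation), which is the kind of value a REMOTE_ADDR field would carry:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Fill buf with the peer's numeric address; returns 0 on success, -1 on
 * failure or an unsupported address family. */
static int peer_addr_string(int sock, char *buf, socklen_t buflen)
{
    struct sockaddr_storage ss;
    socklen_t len = sizeof(ss);

    if (getpeername(sock, (struct sockaddr *)&ss, &len) != 0)
        return -1;
    if (ss.ss_family == AF_INET) {
        struct sockaddr_in *a = (struct sockaddr_in *)&ss;
        return inet_ntop(AF_INET, &a->sin_addr, buf, buflen) ? 0 : -1;
    }
    if (ss.ss_family == AF_INET6) {
        struct sockaddr_in6 *a = (struct sockaddr_in6 *)&ss;
        return inet_ntop(AF_INET6, &a->sin6_addr, buf, buflen) ? 0 : -1;
    }
    return -1;
}
```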
|
CVE-2020-8904 | False | False | False | False | AV:N/AC:L/Au:S/C:N/I:P/A:P | NETWORK | LOW | SINGLE | NONE | PARTIAL | PARTIAL | 5.5 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:N/I:H/A:H | NETWORK | LOW | LOW | NONE | CHANGED | NONE | HIGH | HIGH | 9.6 | CRITICAL | 3.1 | 5.8 | False | [{'url': 'https://github.com/google/asylo/commit/e582f36ac49ee11a21d23ad6a30c333092e0a94e', 'name': 'https://github.com/google/asylo/commit/e582f36ac49ee11a21d23ad6a30c333092e0a94e', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-119'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.6.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An arbitrary memory overwrite vulnerability in the trusted memory of Asylo exists in versions prior to 0.6.0. As the ecall_restore function fails to validate the range of the output_len pointer, an attacker can manipulate the tmp_output_len value and write to an arbitrary location in the trusted (enclave) memory. We recommend updating Asylo to version 0.6.0 or later.'}] | 2020-08-13T14:42Z | 2020-08-12T19:15Z | Improper Restriction of Operations within the Bounds of a Memory Buffer | The software performs operations on a memory buffer, but it can read from or write to a memory location that is outside of the intended boundary of the buffer. |
Certain languages allow direct addressing of memory locations and do not automatically ensure that these locations are valid for the memory buffer that is being referenced. This can cause read or write operations to be performed on memory locations that may be associated with other variables, data structures, or internal program data.
As a result, an attacker may be able to execute arbitrary code, alter the intended control flow, read sensitive information, or cause the system to crash.
| https://cwe.mitre.org/data/definitions/119.html | 0 | Chong Cai | 2020-07-21 17:22:14-07:00 | Check for output_len range in ecall_restore
This may cause vulnerablity if pointing to trusted memory.
This issue was reported by Qinkun Bao, Zhaofeng Chen, Mingshen Sun, and
Kang Li from Baidu Security.
PiperOrigin-RevId: 322476223
Change-Id: I8a6406e9f07a20582d4387bd9a3469dfa9cbcb12 | e582f36ac49ee11a21d23ad6a30c333092e0a94e | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | ecall_restore | ecall_restore( const char * input , uint64_t input_len , char ** output , uint64_t * output_len) | ['input', 'input_len', 'output', 'output_len'] | int ecall_restore(const char *input, uint64_t input_len, char **output,
uint64_t *output_len) {
if (!asylo::primitives::TrustedPrimitives::IsOutsideEnclave(input,
input_len)) {
asylo::primitives::TrustedPrimitives::BestEffortAbort(
"ecall_restore: input found to not be in untrusted memory.");
}
int result = 0;
size_t tmp_output_len;
try {
result = asylo::Restore(input, static_cast<size_t>(input_len), output,
&tmp_output_len);
} catch (...) {
LOG(FATAL) << "Uncaught exception in enclave";
}
if (output_len) {
*output_len = static_cast<uint64_t>(tmp_output_len);
}
return result;
} | 116 | True | 1 |
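
This record's description is that output_len was never range-checked, so the host could point it into trusted memory and have the enclave overwrite it. The later fix (visible in the CVE-2020-8944 record further down) extends the existing IsOutsideEnclave() check to that pointer. A reduced C++ sketch of the rule, with IsOutsideEnclave() declared here as an assumed stand-in for the Asylo primitive of the same name:

```cpp
#include <cstddef>
#include <cstdint>

// Assumed helper with the semantics of
// asylo::primitives::TrustedPrimitives::IsOutsideEnclave().
bool IsOutsideEnclave(const void *addr, size_t size);

// The enclave may write through a host-supplied length pointer only if the
// whole object it points to lies in untrusted memory.
bool SafeToWriteLength(const uint64_t *output_len) {
  return output_len != nullptr &&
         IsOutsideEnclave(output_len, sizeof(*output_len));
}
```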
CVE-2020-8905 | False | False | False | False | AV:N/AC:L/Au:S/C:P/I:N/A:N | NETWORK | LOW | SINGLE | PARTIAL | NONE | NONE | 4.0 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N | NETWORK | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | NONE | 6.5 | MEDIUM | 2.8 | 3.6 | False | [{'url': 'https://github.com/google/asylo/commit/299f804acbb95a612ab7c504d25ab908aa59ae93', 'name': 'https://github.com/google/asylo/commit/299f804acbb95a612ab7c504d25ab908aa59ae93', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-120'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.6.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': "A buffer length validation vulnerability in Asylo versions prior to 0.6.0 allows an attacker to read data they should not have access to. The 'enc_untrusted_recvfrom' function generates a return value which is deserialized by 'MessageReader', and copied into three different 'extents'. The length of the third 'extents' is controlled by the outside world, and not verified on copy, allowing the attacker to force Asylo to copy trusted memory data into an untrusted buffer of significantly small length.. We recommend updating Asylo to version 0.6.0 or later."}] | 2020-08-13T14:40Z | 2020-08-12T19:15Z | Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') | The program copies an input buffer to an output buffer without verifying that the size of the input buffer is less than the size of the output buffer, leading to a buffer overflow. | A buffer overflow condition exists when a program attempts to put more data in a buffer than it can hold, or when a program attempts to put data in a memory area outside of the boundaries of a buffer. The simplest type of error, and the most common cause of buffer overflows, is the "classic" case in which the program copies the buffer without restricting how much is copied. Other variants exist, but the existence of a classic overflow strongly suggests that the programmer is not considering even the most basic of security protections.
| https://cwe.mitre.org/data/definitions/120.html | 0 | Chong Cai | 2020-07-21 17:22:45-07:00 | Fix vulnerability in enc_untrusted_recvfrom
Change recvfrom memcpy to check for received_buffer size to avoid
copying extra buffer.
This issue was reported by Qinkun Bao, Zhaofeng Chen, Mingshen Sun, and
Kang Li from Baidu Security.
PiperOrigin-RevId: 322476299
Change-Id: I3606ff9ec51ec7cc4312c7555c645a2fc6e09b21 | 299f804acbb95a612ab7c504d25ab908aa59ae93 | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | enc_untrusted_recvfrom | enc_untrusted_recvfrom( int sockfd , void * buf , size_t len , int flags , struct sockaddr * src_addr , socklen_t * addrlen) | ['sockfd', 'buf', 'len', 'flags', 'src_addr', 'addrlen'] | ssize_t enc_untrusted_recvfrom(int sockfd, void *buf, size_t len, int flags,
struct sockaddr *src_addr, socklen_t *addrlen) {
int klinux_flags = TokLinuxRecvSendFlag(flags);
if (klinux_flags == 0 && flags != 0) {
errno = EINVAL;
return -1;
}
MessageWriter input;
input.Push<int>(sockfd);
input.Push<uint64_t>(len);
input.Push<int>(klinux_flags);
MessageReader output;
const auto status = NonSystemCallDispatcher(
::asylo::host_call::kRecvFromHandler, &input, &output);
CheckStatusAndParamCount(status, output, "enc_untrusted_recvfrom", 4);
int result = output.next<int>();
int klinux_errno = output.next<int>();
// recvfrom() returns -1 on failure, with errno set to indicate the cause
// of the error.
if (result == -1) {
errno = FromkLinuxErrorNumber(klinux_errno);
return result;
}
auto buffer_received = output.next();
memcpy(buf, buffer_received.data(), len);
// If |src_addr| is not NULL, and the underlying protocol provides the source
// address, this source address is filled in. When |src_addr| is NULL, nothing
// is filled in; in this case, |addrlen| is not used, and should also be NULL.
if (src_addr != nullptr && addrlen != nullptr) {
auto klinux_sockaddr_buf = output.next();
const struct klinux_sockaddr *klinux_addr =
klinux_sockaddr_buf.As<struct klinux_sockaddr>();
FromkLinuxSockAddr(klinux_addr, klinux_sockaddr_buf.size(), src_addr,
addrlen, TrustedPrimitives::BestEffortAbort);
}
return result;
} | 245 | True | 1 |
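
The memcpy in this record copies `len` bytes out of whatever buffer the host returned, so a deliberately short reply makes the copy run past the received data inside the enclave. The fix the commit message describes is to bound the copy by both the caller's buffer and the size actually received. A minimal sketch of that bound (ReceivedExtent is an illustrative stand-in, not the real Asylo type):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>

// Illustrative stand-in for the extent handed back by the message reader.
struct ReceivedExtent {
  const char *data;
  size_t size;
};

// Copy at most what was actually received and at most what the caller's
// buffer can hold, and report how much was copied.
size_t CopyReceived(void *dst, size_t dst_len, const ReceivedExtent &received) {
  const size_t n = std::min(dst_len, received.size);
  std::memcpy(dst, received.data, n);
  return n;
}
```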
CVE-2020-8939 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:N/A:N | LOCAL | LOW | NONE | PARTIAL | NONE | NONE | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | NONE | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/google/asylo/commit/6ff3b77ffe110a33a2f93848a6333f33616f02c4', 'name': 'https://github.com/google/asylo/commit/6ff3b77ffe110a33a2f93848a6333f33616f02c4', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndIncluding': '0.6.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An out of bounds read on the enc_untrusted_inet_ntop function allows an attack to extend the result size that is used by memcpy() to read memory from within the enclave heap. We recommend upgrading past commit 6ff3b77ffe110a33a2f93848a6333f33616f02c4'}] | 2020-12-17T14:04Z | 2020-12-15T15:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Chong Cai | 2020-09-25 14:53:05-07:00 | Check for result size in dst in inet_ntop
PiperOrigin-RevId: 333814318
Change-Id: Id7766ed598809f5df42d457f224d6f3dea06c224 | 6ff3b77ffe110a33a2f93848a6333f33616f02c4 | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | enc_untrusted_inet_ntop | enc_untrusted_inet_ntop( int af , const void * src , char * dst , socklen_t size) | ['af', 'src', 'dst', 'size'] | const char *enc_untrusted_inet_ntop(int af, const void *src, char *dst,
socklen_t size) {
if (!src || !dst) {
errno = EFAULT;
return nullptr;
}
size_t src_size = 0;
if (af == AF_INET) {
src_size = sizeof(struct in_addr);
} else if (af == AF_INET6) {
src_size = sizeof(struct in6_addr);
} else {
errno = EAFNOSUPPORT;
return nullptr;
}
MessageWriter input;
input.Push<int>(TokLinuxAfFamily(af));
input.PushByReference(Extent{reinterpret_cast<const char *>(src), src_size});
input.Push(size);
MessageReader output;
const auto status = NonSystemCallDispatcher(
::asylo::host_call::kInetNtopHandler, &input, &output);
CheckStatusAndParamCount(status, output, "enc_untrusted_inet_ntop", 2);
auto result = output.next();
int klinux_errno = output.next<int>();
if (result.empty()) {
errno = FromkLinuxErrorNumber(klinux_errno);
return nullptr;
}
memcpy(dst, result.data(),
std::min(static_cast<size_t>(size),
static_cast<size_t>(INET6_ADDRSTRLEN)));
return dst;
} | 237 | True | 1 |
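
Here the copy into dst is bounded by the caller's size and INET6_ADDRSTRLEN but not by the size of the host-returned result, so a short result lets memcpy read past it inside the enclave. A hedged sketch of the tighter bound (a generic helper, not the Asylo patch), which also keeps the output NUL-terminated since inet_ntop() produces a C string:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <netinet/in.h>   // INET6_ADDRSTRLEN

// Copy the host-returned address string, bounded by its own reported size,
// the protocol maximum, and the destination buffer, then NUL-terminate.
bool CopyNtopResult(char *dst, size_t dst_size,
                    const char *host_result, size_t host_result_size) {
  if (dst_size == 0) return false;
  const size_t n = std::min({host_result_size,
                             static_cast<size_t>(INET6_ADDRSTRLEN),
                             dst_size - 1});
  std::memcpy(dst, host_result, n);
  dst[n] = '\0';
  return true;
}
```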
CVE-2020-8942 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:N/A:N | LOCAL | LOW | NONE | PARTIAL | NONE | NONE | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | NONE | NONE | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/google/asylo/commit/b1d120a2c7d7446d2cc58d517e20a1b184b82200', 'name': 'https://github.com/google/asylo/commit/b1d120a2c7d7446d2cc58d517e20a1b184b82200', 'refsource': 'CONFIRM', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-125'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndIncluding': '0.6.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An arbitrary memory read vulnerability in Asylo versions up to 0.6.0 allows an untrusted attacker to make a call to enc_untrusted_read whose return size was not validated against the requrested size. The parameter size is unchecked allowing the attacker to read memory locations outside of the intended buffer size including memory addresses within the secure enclave. We recommend upgrading past commit b1d120a2c7d7446d2cc58d517e20a1b184b82200'}] | 2020-12-17T18:44Z | 2020-12-15T15:15Z | Out-of-bounds Read | The software reads data past the end, or before the beginning, of the intended buffer. | Typically, this can allow attackers to read sensitive information from other memory locations or cause a crash. A crash can occur when the code reads a variable amount of data and assumes that a sentinel exists to stop the read operation, such as a NUL in a string. The expected sentinel might not be located in the out-of-bounds memory, causing excessive data to be read, leading to a segmentation fault or a buffer overflow. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent read operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/125.html | 0 | Chong Cai | 2020-09-25 16:09:54-07:00 | Check for return size in enc_untrusted_read
Check return size does not exceed requested. The returned result and
content still cannot be trusted, but it's expected behavior when not
using a secure file system.
PiperOrigin-RevId: 333827386
Change-Id: I0bdec0aec9356ea333dc8c647eba5d2772875f29 | b1d120a2c7d7446d2cc58d517e20a1b184b82200 | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | enc_untrusted_read | enc_untrusted_read( int fd , void * buf , size_t count) | ['fd', 'buf', 'count'] | ssize_t enc_untrusted_read(int fd, void *buf, size_t count) {
return static_cast<ssize_t>(EnsureInitializedAndDispatchSyscall(
asylo::system_call::kSYS_read, fd, buf, count));
} | 36 | True | 1 |
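
The commit message for this record states the check that was missing: the byte count reported back by the untrusted host must not exceed what the enclave asked for, otherwise code that later trusts the return value can walk past buf. A minimal sketch of that validation step (hypothetical helper name):

```cpp
#include <cstddef>
#include <sys/types.h>   // ssize_t

// Propagate host errors, but refuse a result that claims more bytes than
// were requested; the content itself still cannot be trusted.
ssize_t ValidateReadResult(ssize_t host_result, size_t requested) {
  if (host_result < 0) return host_result;
  if (static_cast<size_t>(host_result) > requested) return -1;
  return host_result;
}
```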
CVE-2020-8944 | False | False | False | False | AV:L/AC:L/Au:N/C:N/I:P/A:N | LOCAL | LOW | NONE | NONE | PARTIAL | NONE | 2.1 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:N | LOCAL | LOW | LOW | NONE | UNCHANGED | NONE | HIGH | NONE | 5.5 | MEDIUM | 1.8 | 3.6 | False | [{'url': 'https://github.com/google/asylo/commit/382da2b8b09cbf928668a2445efb778f76bd9c8a', 'name': 'https://github.com/google/asylo/commit/382da2b8b09cbf928668a2445efb778f76bd9c8a', 'refsource': 'CONFIRM', 'tags': ['Exploit', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-787'}]}] | LOW | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndIncluding': '0.6.0', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An arbitrary memory write vulnerability in Asylo versions up to 0.6.0 allows an untrusted attacker to make a call to ecall_restore using the attribute output which fails to check the range of a pointer. An attacker can use this pointer to write to arbitrary memory addresses including those within the secure enclave We recommend upgrading past commit 382da2b8b09cbf928668a2445efb778f76bd9c8a'}] | 2020-12-17T18:20Z | 2020-12-15T15:15Z | Out-of-bounds Write | The software writes data past the end, or before the beginning, of the intended buffer. | Typically, this can result in corruption of data, a crash, or code execution. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. A subsequent write operation then produces undefined or unexpected results.
| https://cwe.mitre.org/data/definitions/787.html | 0 | Chong Cai | 2020-09-28 16:49:54-07:00 | Check output of ecall_restore is outside enclave
PiperOrigin-RevId: 334265380
Change-Id: Ifbaead6bce56f01b2a4d69f53ca508d0138f6f61 | 382da2b8b09cbf928668a2445efb778f76bd9c8a | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | ecall_restore | ecall_restore( const char * input , uint64_t input_len , char ** output , uint64_t * output_len) | ['input', 'input_len', 'output', 'output_len'] | int ecall_restore(const char *input, uint64_t input_len, char **output,
uint64_t *output_len) {
if (!asylo::primitives::TrustedPrimitives::IsOutsideEnclave(input,
input_len) ||
!asylo::primitives::TrustedPrimitives::IsOutsideEnclave(
output_len, sizeof(uint64_t))) {
asylo::primitives::TrustedPrimitives::BestEffortAbort(
"ecall_restore: input/output found to not be in untrusted memory.");
}
int result = 0;
size_t tmp_output_len;
try {
result = asylo::Restore(input, static_cast<size_t>(input_len), output,
&tmp_output_len);
} catch (...) {
LOG(FATAL) << "Uncaught exception in enclave";
}
if (output_len) {
*output_len = static_cast<uint64_t>(tmp_output_len);
}
return result;
} | 133 | True | 1 |
CVE-2021-22550 | False | False | False | False | AV:L/AC:L/Au:N/C:P/I:P/A:P | LOCAL | LOW | NONE | PARTIAL | PARTIAL | PARTIAL | 4.6 | CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H | LOCAL | LOW | LOW | NONE | UNCHANGED | HIGH | HIGH | HIGH | 7.8 | HIGH | 1.8 | 5.9 | False | [{'url': 'https://github.com/google/asylo/commit/a47ef55db2337d29de19c50cd29b0deb2871d31c', 'name': 'https://github.com/google/asylo/commit/a47ef55db2337d29de19c50cd29b0deb2871d31c', 'refsource': 'MISC', 'tags': ['Patch', 'Third Party Advisory']}] | [{'description': [{'lang': 'en', 'value': 'CWE-668'}]}] | MEDIUM | [{'operator': 'OR', 'children': [], 'cpe_match': [{'vulnerable': True, 'cpe23Uri': 'cpe:2.3:a:google:asylo:*:*:*:*:*:*:*:*', 'versionEndExcluding': '0.6.3', 'cpe_name': []}]}] | [{'lang': 'en', 'value': 'An attacker can modify the pointers in enclave memory to overwrite arbitrary memory addresses within the secure enclave. It is recommended to update past 0.6.3 or git commit https://github.com/google/asylo/commit/a47ef55db2337d29de19c50cd29b0deb2871d31c'}] | 2021-06-22T15:37Z | 2021-06-08T14:15Z | Exposure of Resource to Wrong Sphere | The product exposes a resource to the wrong control sphere, providing unintended actors with inappropriate access to the resource. |
Resources such as files and directories may be inadvertently exposed through mechanisms such as insecure permissions, or when a program accidentally operates on the wrong object. For example, a program may intend that private files can only be provided to a specific user. This effectively defines a control sphere that is intended to prevent attackers from accessing these private files. If the file permissions are insecure, then parties other than the user will be able to access those files.
A separate control sphere might effectively require that the user can only access the private files, but not any other files on the system. If the program does not ensure that the user is only requesting private files, then the user might be able to access other files on the system.
In either case, the end result is that a resource has been exposed to the wrong party.
| https://cwe.mitre.org/data/definitions/668.html | 0 | Chong Cai | 2021-02-19 13:42:29-08:00 | Fix vulnerability in UntrustedCacheMalloc
The pointer array is stored in untrusted memory, so we cannot trust the
value even after validation. We should validate the pointer is pointing
to untrusted memory after it's stored inside the enclave.
PiperOrigin-RevId: 358474391
Change-Id: I63cf6c251bdaf1b491dbf06cc0dcf77f7b141756 | a47ef55db2337d29de19c50cd29b0deb2871d31c | False | google/asylo | An open and flexible framework for developing enclave applications | 2018-04-25 16:43:56 | 2022-04-12 20:00:07 | https://asylo.dev | google | 915.0 | 125.0 | asylo::UntrustedCacheMalloc::GetBuffer | asylo::UntrustedCacheMalloc::GetBuffer() | [] | void *UntrustedCacheMalloc::GetBuffer() {
void **buffers = nullptr;
void *buffer;
bool is_pool_empty;
{
LockGuard spin_lock(&lock_);
is_pool_empty = buffer_pool_.empty();
if (is_pool_empty) {
buffers =
primitives::AllocateUntrustedBuffers(kPoolIncrement, kPoolEntrySize);
for (int i = 0; i < kPoolIncrement; i++) {
if (!buffers[i] ||
!TrustedPrimitives::IsOutsideEnclave(buffers[i], kPoolEntrySize)) {
abort();
}
buffer_pool_.push(buffers[i]);
}
}
buffer = buffer_pool_.top();
buffer_pool_.pop();
busy_buffers_.insert(buffer);
}
if (is_pool_empty) {
// Free memory held by the array of buffer pointers returned by
// AllocateUntrustedBuffers.
Free(buffers);
}
return buffer;
} | 142 | True | 1 |
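
The commit message explains the double-fetch problem here: the buffers[] array lives in untrusted memory, so a value validated in place can be swapped by the host before the enclave uses it. The pattern the fix describes is copy-then-validate: fetch each pointer into enclave-local storage once, then validate and use only the local copy. A reduced sketch of that pattern (IsOutsideEnclave() again stands in for the Asylo primitive):

```cpp
#include <cstddef>

// Assumed helper with the semantics of
// asylo::primitives::TrustedPrimitives::IsOutsideEnclave().
bool IsOutsideEnclave(const void *addr, size_t size);

// Fetch an entry from an untrusted pointer array exactly once, then validate
// the enclave-local copy; the untrusted slot may change afterwards, but the
// value the enclave goes on to use cannot.
void *TakeUntrustedPointer(void *const *untrusted_slot, size_t entry_size) {
  void *local_copy = *untrusted_slot;            // single fetch
  if (local_copy == nullptr || !IsOutsideEnclave(local_copy, entry_size)) {
    return nullptr;                              // caller rejects / aborts
  }
  return local_copy;
}
```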