## Creating a table

From here we need to create our first table. Let's recreate the Person table from the Software Carpentry SQL lesson, topic 1.

%%sql
CREATE TABLE Person (ident CHAR(10), personal CHAR(25), family CHAR(25));

%sql SHOW TABLES;
%sql DESCRIBE Person;
lectures/week-03/sql-demo.ipynb
dchud/warehousing-course
cc0-1.0
c52d8668503750e41fdab243eaaac393
## Inserting data

Okay then, let's insert the sample data:

%%sql
INSERT INTO Person VALUES
  ("dyer", "William", "Dyer"),
  ("pb", "Frank", "Pabodie"),
  ("lake", "Anderson", "Lake"),
  ("roe", "Valentina", "Roerich"),
  ("danforth", "Frank", "Danforth");
## Selecting data

Okay, now we're cooking. There's data in the Person table, so we can start to SELECT it.

%sql SELECT * FROM Person;
%sql SELECT * FROM Person WHERE personal = "Frank";
## Accessing data from Python

One of the great things about ipython-sql is that it marshals all the data into Python objects for you. For example, to get the result data into a Python object, grab it from `_`:

result = _
print(result)
You can even convert it to a pandas DataFrame:

df = result.DataFrame()
df
## Cleaning up

If you were just doing a little exploring and wish to clean up, it's easy to get rid of tables and databases. NOTE: these are permanent actions. Only do them if you know you don't need the data any longer. To get rid of a table, use DROP TABLE:

%sql DROP TABLE Person;
%sql SHOW TABLES;
And to get rid of a whole database, use DROP DATABASE:
%sql DROP DATABASE week3demo;
%sql SHOW DATABASES;
We can use gradient descent to minimize a cost function, thereby optimizing our weights.

## ANNs in sklearn

Multi-layer Perceptron (MLP) models in sklearn.

The advantages of MLP are:
- Capability to learn non-linear models.
- Capability to learn models in real-time (on-line learning) using partial_fit.

The disadvantages of MLP include:
- MLPs with hidden layers have a non-convex loss function with more than one local minimum, so different random weight initializations can lead to different validation accuracy.
- MLP requires tuning a number of hyperparameters, such as the number of hidden neurons, layers, and iterations.
- MLP is sensitive to feature scaling.
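As a quick illustration of the gradient descent idea mentioned above, here is a minimal sketch on a one-dimensional quadratic cost (the cost function, learning rate, and step count are illustrative, not taken from the notebook):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize cost(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
print(round(w, 4))  # converges toward the minimum at w = 3
```

Each step moves the weight a small amount opposite the gradient, which is exactly what the MLP solvers below do in many dimensions at once.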
# Build a simple neural net with sklearn: an "OR" gate
from sklearn.neural_network import MLPClassifier

X = [[0., 0.], [1., 1.], [1., 0.], [0., 1.]]
y = [0, 1, 1, 1]

clf = MLPClassifier(hidden_layer_sizes=(5, 2), solver='lbfgs', random_state=42)
clf.fit(X, y)

# predict new observations
clf.predict([[0, 1]])

# find parameters
print([coef.shape for coef in clf.coefs_])
clf.coefs_

clf.predict([[2, 2]])
clf.predict([[-2, 2]])
clf.predict([[-2, -2]])
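The on-line learning mentioned above goes through `partial_fit`, which updates the model one mini-batch at a time instead of training in a single `fit` call. A minimal sketch on the same OR-gate data (the loop count and hyperparameters are illustrative; the full set of class labels must be supplied on the first call, and `partial_fit` requires an SGD-style solver such as the default `adam` rather than `lbfgs`):

```python
from sklearn.neural_network import MLPClassifier

X = [[0., 0.], [1., 1.], [1., 0.], [0., 1.]]
y = [0, 1, 1, 1]

clf = MLPClassifier(hidden_layer_sizes=(5, 2), random_state=42)
# Train incrementally: each call performs one pass over this mini-batch.
for _ in range(500):
    clf.partial_fit(X, y, classes=[0, 1])

print(clf.predict([[1., 1.]]))
```

This is how you would keep updating a model as new observations stream in, without retraining from scratch.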
Projects/Project5/NeuralNetSum.ipynb
ptpro3/ptpro3.github.io
mit
960e028ef77ee340056533b684aa5259
## Question 4

Write a function to update the dataframe to include a new column called "Points" which is a weighted value where each gold medal counts for 3 points, silver medals for 2 points, and bronze medals for 1 point. The function should return only the column (a Series object) which you created.

This function should return a Series named Points of length 146.

def answer_four():
    return "YOUR ANSWER HERE"
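One way to approach it, sketched on a toy dataframe (the real assignment's Olympics dataframe has different column names for the combined medal counts; plain 'Gold'/'Silver'/'Bronze' columns and the country index here are assumptions for illustration only):

```python
import pandas as pd

# Hypothetical stand-in for the assignment's medal dataframe.
df = pd.DataFrame({'Gold': [2, 0], 'Silver': [1, 3], 'Bronze': [0, 1]},
                  index=['CountryA', 'CountryB'])

def answer_four(df):
    # Weighted score: gold = 3 points, silver = 2, bronze = 1.
    df['Points'] = df['Gold'] * 3 + df['Silver'] * 2 + df['Bronze'] * 1
    return df['Points']

print(answer_four(df))  # CountryA -> 8, CountryB -> 7
```

Because the new column is assigned on the dataframe before being returned, the function both updates `df` and returns the Series, as the question asks.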
intro-python-data-science/course1_downloads/Assignment 2.ipynb
joaoandre/algorithms
mit
670aa9bbd9d4d91dfcf32090ef81cf87
## Question 6

Only looking at the three most populous counties for each state, what are the three most populous states (in order of highest population to lowest population)?

This function should return a list of string values.

def answer_six():
    return "YOUR ANSWER HERE"
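A sketch of the aggregation, again on a toy census-like dataframe (the column names 'STNAME', 'CTYNAME', and 'CENSUS2010POP' follow the course's census file; the rows here are made up, so only two states appear):

```python
import pandas as pd

census = pd.DataFrame({
    'STNAME':        ['A', 'A', 'A', 'A', 'B', 'B', 'B'],
    'CTYNAME':       ['a1', 'a2', 'a3', 'a4', 'b1', 'b2', 'b3'],
    'CENSUS2010POP': [10, 40, 30, 20, 25, 25, 25]})

def answer_six(df):
    # Sum each state's three most populous counties, then rank the states.
    top3 = (df.groupby('STNAME')['CENSUS2010POP']
              .apply(lambda pops: pops.nlargest(3).sum()))
    return top3.nlargest(3).index.tolist()

print(answer_six(census))
```

The groupby-then-`nlargest(3).sum()` step handles "only the three most populous counties per state", and the outer `nlargest(3)` ranks the states themselves.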
Building the dataframe and producing the plots
simulated_variables = ['coicop12_{}'.format(coicop12_index) for coicop12_index in range(1, 13)]

for year in [2000, 2005, 2011]:
    survey_scenario = SurveyScenario.create(year=year)
    pivot_table = pandas.DataFrame()
    for values in simulated_variables:
        pivot_table = pandas.concat([
            pivot_table,
            survey_scenario.compute_pivot_table(values=[values], columns=['niveau_vie_decile'])
            ])
    df = pivot_table.T
    df['depenses_tot'] = df[['coicop12_{}'.format(i) for i in range(1, 13)]].sum(axis=1)
    for i in range(1, 13):
        df['part_coicop12_{}'.format(i)] = \
            df['coicop12_{}'.format(i)] / df['depenses_tot']
    print('Profil de la consommation des ménages en {}'.format(year))
    graph_builder_bar(df[['part_coicop12_{}'.format(i) for i in range(1, 13)]])
openfisca_france_indirect_taxation/examples/notebooks/consommations_coicop_par_decile.ipynb
openfisca/openfisca-france-indirect-taxation
agpl-3.0
94cd996ac00a532c7059e7949de78d83
ν™•μž₯ μœ ν˜• <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/guide/extension_type"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.orgμ—μ„œ 보기</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colabμ—μ„œ μ‹€ν–‰</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHubμ—μ„œ μ†ŒμŠ€ 보기</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/extension_type.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">λ…ΈνŠΈλΆ λ‹€μš΄λ‘œλ“œ</a></td> </table> μ„€μ •
!pip install -q tf_nightly import tensorflow as tf import numpy as np from typing import Tuple, List, Mapping, Union, Optional import tempfile
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
a0e7b5a82c3cd3bd3ae8992b96c26fa6
ν™•μž₯ μœ ν˜• μ‚¬μš©μž μ •μ˜ μœ ν˜•μ„ μ‚¬μš©ν•˜λ©΄ ν”„λ‘œμ νŠΈλ₯Ό 더 읽기 쉽고 λͺ¨λ“ˆμ‹μœΌλ‘œ μœ μ§€ 관리할 수 μžˆμŠ΅λ‹ˆλ‹€. κ·ΈλŸ¬λ‚˜ λŒ€λΆ€λΆ„μ˜ TensorFlow APIλŠ” μ‚¬μš©μž μ •μ˜ Python μœ ν˜•μ— λŒ€ν•œ 지원이 맀우 μ œν•œμ μž…λ‹ˆλ‹€. 이것은 (예 λͺ¨λ‘ 높은 μˆ˜μ€€μ˜ APIλ₯Ό 포함 Keras , tf.function , tf.SavedModel (μ˜ˆλ‘œμ„œ ν•˜μœ„ 레벨의 API) tf.while_loop 및 tf.concat ). TensorFlow ν™•μž₯ μœ ν˜• 을 μ‚¬μš©ν•˜μ—¬ TensorFlow의 API와 μ›ν™œν•˜κ²Œ μž‘λ™ν•˜λŠ” μ‚¬μš©μž μ •μ˜ 객체 지ν–₯ μœ ν˜•μ„ 생성할 수 μžˆμŠ΅λ‹ˆλ‹€. tf.experimental.ExtensionType 을 기본으둜 ν•˜λŠ” Python 클래슀λ₯Ό μ •μ˜ν•˜κ³  μœ ν˜• 주석 을 μ‚¬μš©ν•˜μ—¬ 각 ν•„λ“œμ˜ μœ ν˜•μ„ μ§€μ •ν•˜λ©΄ λ©λ‹ˆλ‹€.
class TensorGraph(tf.experimental.ExtensionType): """A collection of labeled nodes connected by weighted edges.""" edge_weights: tf.Tensor # shape=[num_nodes, num_nodes] node_labels: Mapping[str, tf.Tensor] # shape=[num_nodes]; dtype=any class MaskedTensor(tf.experimental.ExtensionType): """A tensor paired with a boolean mask, indicating which values are valid.""" values: tf.Tensor mask: tf.Tensor # shape=values.shape; false for missing/invalid values. class CSRSparseMatrix(tf.experimental.ExtensionType): """Compressed sparse row matrix (https://en.wikipedia.org/wiki/Sparse_matrix).""" values: tf.Tensor # shape=[num_nonzero]; dtype=any col_index: tf.Tensor # shape=[num_nonzero]; dtype=int64 row_index: tf.Tensor # shape=[num_rows+1]; dtype=int64
tf.experimental.ExtensionType κΈ°λ³Έ ν΄λž˜μŠ€λŠ” ν‘œμ€€ Python 라이브러리의 typing.NamedTuple 및 @dataclasses.dataclass 와 μœ μ‚¬ν•˜κ²Œ μž‘λ™ν•©λ‹ˆλ‹€. 특히 ν•„λ“œ μœ ν˜• 주석을 기반으둜 μƒμ„±μžμ™€ 특수 λ©”μ„œλ“œ(예: __repr__ 및 __eq__ 일반적으둜 ν™•μž₯ μœ ν˜•μ€ λ‹€μŒ 두 가지 λ²”μ£Ό 쀑 ν•˜λ‚˜λ‘œ λΆ„λ₯˜λ˜λŠ” κ²½ν–₯이 μžˆμŠ΅λ‹ˆλ‹€. κ΄€λ ¨ κ°’μ˜ μ»¬λ ‰μ…˜μ„ κ·Έλ£Ήν™”ν•˜κ³  ν•΄λ‹Ή 값을 기반으둜 μœ μš©ν•œ μž‘μ—…μ„ μ œκ³΅ν•  수 μžˆλŠ” 데이터 ꡬ쑰. 데이터 κ΅¬μ‘°λŠ” μƒλ‹Ήνžˆ 일반적일 수 μžˆμŠ΅λ‹ˆλ‹€(예 TensorGraph 예). λ˜λŠ” νŠΉμ • λͺ¨λΈμ— κ³ λ„λ‘œ λ§žμΆ€ν™”λ  수 μžˆμŠ΅λ‹ˆλ‹€. "Tensor"의 κ°œλ…μ„ μ „λ¬Έν™”ν•˜κ±°λ‚˜ ν™•μž₯ν•˜λŠ” Tensor와 μœ μ‚¬ν•œ μœ ν˜•μž…λ‹ˆλ‹€. 이 λ²”μ£Όμ˜ μœ ν˜•μ—λŠ” rank , shape 및 일반적으둜 dtype . tf.stack , tf.add λ˜λŠ” tf.matmul )κ³Ό ν•¨κ»˜ μ‚¬μš©ν•˜λŠ” 것이 μ’‹μŠ΅λ‹ˆλ‹€. MaskedTensor 및 CSRSparseMatrix λŠ” ν…μ„œ μœ μ‚¬ μœ ν˜•μ˜ μ˜ˆμž…λ‹ˆλ‹€. μ§€μ›λ˜λŠ” API ν™•μž₯ μœ ν˜•μ€ λ‹€μŒ TensorFlow APIμ—μ„œ μ§€μ›λ©λ‹ˆλ‹€. Keras Models 및 Layers λŒ€ν•œ μž…λ ₯ 및 좜λ ₯으둜 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. tf.data.Dataset : ν™•μž₯ μœ ν˜•μ€ Datasets Iterators μ˜ν•΄ λ°˜ν™˜λ©λ‹ˆλ‹€. Tensorflow ν—ˆλΈŒ tf.hub λͺ¨λ“ˆμ˜ μž…λ ₯ 및 좜λ ₯으둜 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. SavedModel SavedModel ν•¨μˆ˜μ— λŒ€ν•œ μž…λ ₯ 및 좜λ ₯으둜 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. tf.function @tf.function λ°μ½”λ ˆμ΄ν„°λ‘œ λž˜ν•‘λœ ν•¨μˆ˜μ˜ 인수 및 λ°˜ν™˜ κ°’μœΌλ‘œ μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. while 루프 : ν™•μž₯ μœ ν˜•μ€ tf.while_loop μ—μ„œ 루프 λ³€μˆ˜λ‘œ μ‚¬μš©ν•  수 있으며 while 루프 본문에 λŒ€ν•œ 인수 및 λ°˜ν™˜ κ°’μœΌλ‘œ μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. conditionals tf.cond 및 tf.case μ‚¬μš©ν•˜μ—¬ μ‘°κ±΄λΆ€λ‘œ 선택할 수 μžˆμŠ΅λ‹ˆλ‹€. py_function : ν™•μž₯ μœ ν˜•μ„ 인수둜 μ‚¬μš©ν•  수 있고 func μΈμˆ˜μ— tf.py_function λ°˜ν™˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. Tensor ops tf.matmul , tf.gather 및 tf.reduce_sum )을 ν—ˆμš©ν•˜λŠ” λŒ€λΆ€λΆ„μ˜ TensorFlow μž‘μ—…μ„ μ§€μ›ν•˜λ„λ‘ ν™•μž₯될 수 μžˆμŠ΅λ‹ˆλ‹€. μžμ„Έν•œ λ‚΄μš©μ€ μ•„λž˜μ˜ " λ””μŠ€νŒ¨μΉ˜ " μ„Ήμ…˜μ„ μ°Έμ‘°ν•˜μ‹­μ‹œμ˜€. 배포 μ „λž΅ : ν™•μž₯ μœ ν˜•μ„ λ³΅μ œλ³Έλ‹Ή κ°’μœΌλ‘œ μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 
μžμ„Έν•œ λ‚΄μš©μ€ μ•„λž˜ "ExtensionTypesλ₯Ό μ§€μ›ν•˜λŠ” TensorFlow API" μ„Ήμ…˜μ„ μ°Έμ‘°ν•˜μ„Έμš”. μš”κ΅¬ 사항 ν•„λ“œ μœ ν˜• λͺ¨λ“  ν•„λ“œ(일λͺ… μΈμŠ€ν„΄μŠ€ λ³€μˆ˜)λ₯Ό μ„ μ–Έν•΄μ•Ό ν•˜λ©° 각 ν•„λ“œμ— μœ ν˜• 주석을 μ œκ³΅ν•΄μ•Ό ν•©λ‹ˆλ‹€. λ‹€μŒ μœ ν˜• 주석이 μ§€μ›λ©λ‹ˆλ‹€. μœ ν˜• | μ˜ˆμ‹œ --- | --- 파이썬 μ •μˆ˜ | i: int 파이썬 수레 | f: float 파이썬 λ¬Έμžμ—΄ | s: str 파이썬 λΆ€μšΈ | b: bool 파이썬 μ—†μŒ | n: None ν…μ„œ λͺ¨μ–‘ | shape: tf.TensorShape ν…μ„œ dtypes | dtype: tf.DType ν…μ„œ | t: tf.Tensor ν™•μž₯ μœ ν˜• | mt: MyMaskedTensor λΉ„μ •ν˜• ν…μ„œ | rt: tf.RaggedTensor ν¬μ†Œ ν…μ„œ | st: tf.SparseTensor μΈλ±μ‹±λœ 슬라이슀 | s: tf.IndexedSlices 선택적 ν…μ„œ | o: tf.experimental.Optional μœ ν˜• μ‘°ν•© | int_or_float: typing.Union[int, float] νŠœν”Œ | params: typing.Tuple[int, float, tf.Tensor, int] κ°€λ³€ 길이 νŠœν”Œ | lengths: typing.Tuple[int, ...] 맀핑 | tags: typing.Mapping[str, tf.Tensor] 선택적 κ°’ | weight: typing.Optional[tf.Tensor] κ°€λ³€μ„± ν™•μž₯ μœ ν˜•μ€ λ³€κ²½ λΆˆκ°€λŠ₯ν•΄μ•Ό ν•©λ‹ˆλ‹€. μ΄λ ‡κ²Œ ν•˜λ©΄ TensorFlow의 κ·Έλž˜ν”„ 좔적 λ©”μ»€λ‹ˆμ¦˜μœΌλ‘œ μ μ ˆν•˜κ²Œ 좔적할 수 μžˆμŠ΅λ‹ˆλ‹€. ν™•μž₯ μœ ν˜• 값을 λ³€κ²½ν•˜λ €λŠ” 경우 값을 λ³€ν™˜ν•˜λŠ” λ©”μ„œλ“œλ₯Ό λŒ€μ‹  μ •μ˜ν•˜λŠ” 것이 μ’‹μŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄ MaskedTensor λ₯Ό λ³€κ²½ν•˜κΈ° μœ„ν•΄ set_mask MaskedTensor λ₯Ό λ°˜ν™˜ν•˜λŠ” replace_mask λ©”μ„œλ“œλ₯Ό μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor def replace_mask(self, new_mask): self.values.shape.assert_is_compatible_with(new_mask.shape) return MaskedTensor(self.values, new_mask)
ExtensionType μΆ”κ°€ν•œ κΈ°λŠ₯ ExtensionType κΈ°λ³Έ ν΄λž˜μŠ€λŠ” λ‹€μŒ κΈ°λŠ₯을 μ œκ³΅ν•©λ‹ˆλ‹€. μƒμ„±μž( __init__ ). 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„ 방법( __repr__ ). 등식 및 뢀등식 μ—°μ‚°μž( __eq__ ). μœ νš¨μ„± 검사 방법( __validate__ ). κ°•μ œ λΆˆλ³€μ„±. μ€‘μ²©λœ TypeSpec . ν…μ„œ API λ””μŠ€νŒ¨μΉ˜ 지원. 이 κΈ°λŠ₯을 μ‚¬μš©μž μ •μ˜ν•˜λŠ” 방법에 λŒ€ν•œ μžμ„Έν•œ λ‚΄μš©μ€ μ•„λž˜μ˜ "ExtensionType μ‚¬μš©μž μ •μ˜" μ„Ήμ…˜μ„ μ°Έμ‘°ν•˜μ‹­μ‹œμ˜€. κ±΄μ„€μž ExtensionType 에 μ˜ν•΄ μΆ”κ°€λœ μƒμ„±μžλŠ” 각 ν•„λ“œλ₯Ό λͺ…λͺ…λœ 인수둜 μ‚¬μš©ν•©λ‹ˆλ‹€(클래슀 μ •μ˜μ— λ‚˜μ—΄λœ μˆœμ„œλŒ€λ‘œ). 이 μƒμ„±μžλŠ” 각 λ§€κ°œλ³€μˆ˜λ₯Ό μœ ν˜• κ²€μ‚¬ν•˜κ³  ν•„μš”ν•œ 경우 λ³€ν™˜ν•©λ‹ˆλ‹€. 특히, Tensor tf.convert_to_tensor μ‚¬μš©ν•˜μ—¬ λ³€ν™˜λ©λ‹ˆλ‹€. Tuple ν•„λ“œλ‘œ λ³€ν™˜λ©λ‹ˆλ‹€ tuple 의; Mapping ν•„λ“œλŠ” λ³€κ²½ν•  수 μ—†λŠ” μ‚¬μ „μœΌλ‘œ λ³€ν™˜λ©λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor # Constructor takes one parameter for each field. mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]], mask=[[True, True, False], [True, False, True]]) # Fields are type-checked and converted to the declared types. # E.g., mt.values is converted to a Tensor. print(mt.values)
ν•„λ“œ 값을 μ„ μ–Έλœ μœ ν˜•μœΌλ‘œ λ³€ν™˜ν•  수 μ—†λŠ” 경우 μƒμ„±μžλŠ” TypeError
try: MaskedTensor([1, 2, 3], None) except TypeError as e: print(f"Got expected TypeError: {e}")
ν•„λ“œμ˜ 기본값은 클래슀 μˆ˜μ€€μ—μ„œ 값을 μ„€μ •ν•˜μ—¬ 지정할 수 μžˆμŠ΅λ‹ˆλ‹€.
class Pencil(tf.experimental.ExtensionType): color: str = "black" has_erasor: bool = True length: tf.Tensor = 1.0 Pencil() Pencil(length=0.5, color="blue")
인쇄 κ°€λŠ₯ν•œ ν‘œν˜„ ExtensionType 은 클래슀 이름과 각 ν•„λ“œμ˜ 값을 ν¬ν•¨ν•˜λŠ” κΈ°λ³Έ 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„ 방법( __repr__
print(MaskedTensor(values=[1, 2, 3], mask=[True, True, False]))
λ“±ν˜Έ μ—°μ‚°μž ExtensionType 은 μœ ν˜•μ΄ λ™μΌν•˜κ³  λͺ¨λ“  ν•„λ“œκ°€ λ™μΌν•œ 경우 두 값을 λ™μΌν•˜κ²Œ κ°„μ£Όν•˜λŠ” κΈ°λ³Έ 동등 μ—°μ‚°μž( __eq__ 및 __ne__ ν…μ„œ ν•„λ“œλŠ” λͺ¨μ–‘이 λ™μΌν•˜κ³  λͺ¨λ“  μš”μ†Œμ— λŒ€ν•΄ μš”μ†Œλ³„λ‘œ λ™μΌν•œ 경우 λ™μΌν•œ κ²ƒμœΌλ‘œ κ°„μ£Όλ©λ‹ˆλ‹€.
a = MaskedTensor([1, 2], [True, False]) b = MaskedTensor([[3, 4], [5, 6]], [[False, True], [True, True]]) print(f"a == a: {a==a}") print(f"a == b: {a==b}") print(f"a == a.values: {a==a.values}")
μ°Έκ³ : Tensor κ°€ ν¬ν•¨λœ 경우 __eq__ λŠ” (Python λΆ€μšΈ κ°’ λŒ€μ‹ ) Tensor λ°˜ν™˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 검증 방법 ExtensionType 은 ν•„λ“œμ— λŒ€ν•œ μœ νš¨μ„± 검사λ₯Ό μˆ˜ν–‰ν•˜κΈ° μœ„ν•΄ μž¬μ •μ˜ν•  수 μžˆλŠ” __validate__ μƒμ„±μžκ°€ 호좜되고 ν•„λ“œκ°€ μœ ν˜• κ²€μ‚¬λ˜κ³  μ„ μ–Έλœ μœ ν˜•μœΌλ‘œ λ³€ν™˜λœ 후에 μ‹€ν–‰λ˜λ―€λ‘œ λͺ¨λ“  ν•„λ“œμ— μ„ μ–Έλœ μœ ν˜•μ΄ μžˆλ‹€κ³  κ°€μ •ν•  수 μžˆμŠ΅λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œλŠ” MaskedTensor λ₯Ό μ—…λ°μ΄νŠΈν•˜μ—¬ ν•΄λ‹Ή ν•„λ“œμ˜ shape s 및 dtype 을 ν™•μΈν•©λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): """A tensor paired with a boolean mask, indicating which values are valid.""" values: tf.Tensor mask: tf.Tensor def __validate__(self): self.values.shape.assert_is_compatible_with(self.mask.shape) assert self.mask.dtype.is_bool, 'mask.dtype must be bool' try: MaskedTensor([1, 2, 3], [0, 1, 0]) # wrong dtype for mask. except AssertionError as e: print(f"Got expected AssertionError: {e}") try: MaskedTensor([1, 2, 3], [True, False]) # shapes don't match. except ValueError as e: print(f"Got expected ValueError: {e}")
κ°•μ œ λΆˆλ³€μ„± ExtensionType __setattr__ 및 __delattr__ λ©”μ„œλ“œλ₯Ό μž¬μ •μ˜ν•˜μ—¬ λ³€ν˜•μ„ λ°©μ§€ν•˜μ—¬ ν™•μž₯ μœ ν˜• 값을 λ³€κ²½ν•  수 없도둝 ν•©λ‹ˆλ‹€.
mt = MaskedTensor([1, 2, 3], [True, False, True]) try: mt.mask = [True, True, True] except AttributeError as e: print(f"Got expected AttributeError: {e}") try: mt.mask[0] = False except TypeError as e: print(f"Got expected TypeError: {e}") try: del mt.mask except AttributeError as e: print(f"Got expected AttributeError: {e}")
μ€‘μ²©λœ μœ ν˜• 사양 각 ExtensionType ν΄λž˜μŠ€μ—λŠ” μžλ™μœΌλ‘œ μƒμ„±λ˜κ³  &lt;extension_type_name&gt;.Spec TypeSpec ν΄λž˜μŠ€κ°€ μžˆμŠ΅λ‹ˆλ‹€. 이 ν΄λž˜μŠ€λŠ” μ€‘μ²©λœ ν…μ„œμ˜ 값을 μ œμ™Έν•œ κ°’μ—μ„œ λͺ¨λ“  정보λ₯Ό μΊ‘μ²˜ν•©λ‹ˆλ‹€. 특히 TypeSpec 은 μ€‘μ²©λœ Tensor, ExtensionType λ˜λŠ” CompositeTensorλ₯Ό TypeSpec 으둜 λŒ€μ²΄ν•˜μ—¬ μƒμ„±λ©λ‹ˆλ‹€.
class Player(tf.experimental.ExtensionType): name: tf.Tensor attributes: Mapping[str, tf.Tensor] anne = Player("Anne", {"height": 8.3, "speed": 28.1}) anne_spec = tf.type_spec_from_value(anne) print(anne_spec.name) # Records dtype and shape, but not the string value. print(anne_spec.attributes) # Records keys and TensorSpecs for values.
TypeSpec values can be constructed explicitly, or they can be built from an ExtensionType value using tf.type_spec_from_value:

spec1 = Player.Spec(name=tf.TensorSpec([], tf.float32), attributes={})
spec2 = tf.type_spec_from_value(anne)
TypeSpecs are used by TensorFlow to divide values into a static component (which is fixed at graph-construction time) and a dynamic component (which can vary each time the graph is run). The static component is encoded in a tf.TypeSpec, and the dynamic component is encoded as a list of tf.Tensors. For example, tf.function retraces its wrapped function whenever an argument has a previously unseen TypeSpec:

@tf.function
def anonymize_player(player):
  print("<<TRACING>>")
  return Player("<anonymous>", player.attributes)

# Function gets traced (first time the function has been called):
anonymize_player(Player("Anne", {"height": 8.3, "speed": 28.1}))

# Function does NOT get traced (same TypeSpec: just tensor values changed)
anonymize_player(Player("Bart", {"height": 8.1, "speed": 25.3}))

# Function gets traced (new TypeSpec: keys for attributes changed):
anonymize_player(Player("Chuck", {"height": 11.0, "jump": 5.3}))
μžμ„Έν•œ λ‚΄μš©μ€ tf.function κ°€μ΄λ“œλ₯Ό μ°Έμ‘°ν•˜μ‹­μ‹œμ˜€. ExtensionType μ‚¬μš©μž μ •μ˜ λ‹¨μˆœνžˆ ν•„λ“œμ™€ ν•΄λ‹Ή μœ ν˜•μ„ μ„ μ–Έν•˜λŠ” 것 외에도 ν™•μž₯ μœ ν˜•μ€ λ‹€μŒμ„ μˆ˜ν–‰ν•  수 μžˆμŠ΅λ‹ˆλ‹€. κΈ°λ³Έ 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„( __repr__ )을 μž¬μ •μ˜ν•©λ‹ˆλ‹€. 방법을 μ •μ˜ν•©λ‹ˆλ‹€. 클래슀 λ©”μ„œλ“œμ™€ 정적 λ©”μ„œλ“œλ₯Ό μ •μ˜ν•©λ‹ˆλ‹€. 속성을 μ •μ˜ν•©λ‹ˆλ‹€. κΈ°λ³Έ μƒμ„±μž( __init__ )λ₯Ό μž¬μ •μ˜ν•©λ‹ˆλ‹€. κΈ°λ³Έ ν•­λ“± μ—°μ‚°μž( __eq__ )λ₯Ό μž¬μ •μ˜ν•©λ‹ˆλ‹€. μ—°μ‚°μžλ₯Ό μ •μ˜ν•©λ‹ˆλ‹€(예: __add__ 및 __lt__ ). ν•„λ“œμ˜ 기본값을 μ„ μ–Έν•©λ‹ˆλ‹€. ν•˜μœ„ 클래슀λ₯Ό μ •μ˜ν•©λ‹ˆλ‹€. κΈ°λ³Έ 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„ μž¬μ •μ˜ ν™•μž₯ μœ ν˜•μ— λŒ€ν•΄ 이 κΈ°λ³Έ λ¬Έμžμ—΄ λ³€ν™˜ μ—°μ‚°μžλ₯Ό μž¬μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œμ—μ„œλŠ” 값이 Eager λͺ¨λ“œμ—μ„œ 인쇄될 λ•Œ 더 읽기 μ‰¬μš΄ λ¬Έμžμ—΄ ν‘œν˜„μ„ 생성 MaskedTensor
class MaskedTensor(tf.experimental.ExtensionType): """A tensor paired with a boolean mask, indicating which values are valid.""" values: tf.Tensor mask: tf.Tensor # shape=values.shape; false for invalid values. def __repr__(self): return masked_tensor_str(self.values, self.mask) def masked_tensor_str(values, mask): if isinstance(values, tf.Tensor): if hasattr(values, 'numpy') and hasattr(mask, 'numpy'): return f'<MaskedTensor {masked_tensor_str(values.numpy(), mask.numpy())}>' else: return f'MaskedTensor(values={values}, mask={mask})' if len(values.shape) == 1: items = [repr(v) if m else '_' for (v, m) in zip(values, mask)] else: items = [masked_tensor_str(v, m) for (v, m) in zip(values, mask)] return '[%s]' % ', '.join(items) mt = MaskedTensor(values=[[1, 2, 3], [4, 5, 6]], mask=[[True, True, False], [True, False, True]]) print(mt)
λ©”μ†Œλ“œ μ •μ˜ ν™•μž₯ μœ ν˜•μ€ 일반 Python ν΄λž˜μŠ€μ™€ λ§ˆμ°¬κ°€μ§€λ‘œ λ©”μ„œλ“œλ₯Ό μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄ MaskedTensor default λŒ€μ²΄λœ λ§ˆμŠ€ν‚Ήλœ κ°’ self 의 볡사본을 λ°˜ν™˜ν•˜λŠ” with_default λ©”μ„œλ“œλ₯Ό μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. @tf.function λ°μ½”λ ˆμ΄ν„°λ‘œ 주석을 달 수 μžˆμŠ΅λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor def with_default(self, default): return tf.where(self.mask, self.values, default) MaskedTensor([1, 2, 3], [True, False, True]).with_default(0)
클래슀 λ©”μ„œλ“œ 및 정적 λ©”μ„œλ“œ μ •μ˜ @classmethod 및 @staticmethod λ°μ½”λ ˆμ΄ν„°λ₯Ό μ‚¬μš©ν•˜μ—¬ λ©”μ†Œλ“œλ₯Ό μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄ MaskedTensor μœ ν˜•μ€ 주어진 κ°’μœΌλ‘œ λͺ¨λ“  μš”μ†Œλ₯Ό λ§ˆμŠ€ν‚Ήν•˜λŠ” νŒ©ν† λ¦¬ λ©”μ†Œλ“œλ₯Ό μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor def __repr__(self): return masked_tensor_str(self.values, self.mask) @staticmethod def from_tensor_and_value_to_mask(values, value_to_mask): return MaskedTensor(values, values == value_to_mask) x = tf.constant([[1, 0, 2], [3, 0, 0]]) MaskedTensor.from_tensor_and_value_to_mask(x, 0)
속성 μ •μ˜ ν™•μž₯ μœ ν˜•μ€ 일반 Python ν΄λž˜μŠ€μ™€ λ§ˆμ°¬κ°€μ§€λ‘œ @property λ°μ½”λ ˆμ΄ν„°λ₯Ό μ‚¬μš©ν•˜μ—¬ 속성을 μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄ MaskedTensor μœ ν˜•μ€ κ°’μ˜ dtype에 λŒ€ν•œ 약칭인 dtype 속성을 μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor @property def dtype(self): return self.values.dtype MaskedTensor([1, 2, 3], [True, False, True]).dtype
κΈ°λ³Έ μƒμ„±μž μž¬μ •μ˜ ν™•μž₯ μœ ν˜•μ— λŒ€ν•œ κΈ°λ³Έ μƒμ„±μžλ₯Ό μž¬μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. μ‚¬μš©μž μ •μ˜ μƒμ„±μžλŠ” μ„ μ–Έλœ λͺ¨λ“  ν•„λ“œμ— λŒ€ν•΄ 값을 μ„€μ •ν•΄μ•Ό ν•©λ‹ˆλ‹€. μ‚¬μš©μž μ •μ˜ μƒμ„±μžκ°€ λ°˜ν™˜λœ ν›„ λͺ¨λ“  ν•„λ“œκ°€ μœ ν˜• κ²€μ‚¬λ˜κ³  μœ„μ—μ„œ μ„€λͺ…ν•œ λŒ€λ‘œ 값이 λ³€ν™˜λ©λ‹ˆλ‹€.
class Toy(tf.experimental.ExtensionType): name: str price: tf.Tensor def __init__(self, name, price, discount=0): self.name = name self.price = price * (1 - discount) print(Toy("ball", 5.0, discount=0.2)) # On sale -- 20% off!
λ˜λŠ” κΈ°λ³Έ μƒμ„±μžλ₯Ό κ·ΈλŒ€λ‘œ 두고 ν•˜λ‚˜ μ΄μƒμ˜ νŒ©ν† λ¦¬ λ©”μ†Œλ“œλ₯Ό μΆ”κ°€ν•˜λŠ” 것을 κ³ λ €ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 예:
class Toy(tf.experimental.ExtensionType): name: str price: tf.Tensor @staticmethod def new_toy_with_discount(name, price, discount): return Toy(name, price * (1 - discount)) print(Toy.new_toy_with_discount("ball", 5.0, discount=0.2))
κΈ°λ³Έ ν•­λ“± μ—°μ‚°μž μž¬μ •μ˜( __eq__ ) ν™•μž₯ μœ ν˜•μ— λŒ€ν•œ __eq__ μ—°μ‚°μžλ₯Ό μž¬μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œμ—μ„œλŠ” MaskedTensor 비ꡐ할 λ•Œ 마슀크된 μš”μ†Œλ₯Ό λ¬΄μ‹œν•˜λ„λ‘ MaskedTensorλ₯Ό μ—…λ°μ΄νŠΈν•©λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor def __repr__(self): return masked_tensor_str(self.values, self.mask) def __eq__(self, other): result = tf.math.equal(self.values, other.values) result = result | ~(self.mask & other.mask) return tf.reduce_all(result) x = MaskedTensor([1, 2, 3, 4], [True, True, False, True]) y = MaskedTensor([5, 2, 0, 4], [False, True, False, True]) print(x == y)
μ°Έκ³ : κΈ°λ³Έ κ΅¬ν˜„μ€ λ‹¨μˆœνžˆ __eq__ λ₯Ό ν˜ΈμΆœν•˜κ³  κ²°κ³Όλ₯Ό λ¬΄νš¨ν™”ν•˜κΈ° __ne__ λ₯Ό μž¬μ •μ˜ν•  ν•„μš”κ°€ μ—†μŠ΅λ‹ˆλ‹€. μ •λ°©ν–₯ μ°Έμ‘° μ‚¬μš© ν•„λ“œ μœ ν˜•μ΄ 아직 μ •μ˜λ˜μ§€ μ•Šμ€ 경우 μœ ν˜• 이름이 ν¬ν•¨λœ λ¬Έμžμ—΄μ„ λŒ€μ‹  μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œμ—μ„œλŠ” Node μœ ν˜•μ΄ 아직 (μ™„μ „νžˆ) μ •μ˜λ˜μ§€ μ•Šμ•˜κΈ° λ•Œλ¬Έμ— "Node" children ν•„λ“œμ— 주석을 λ‹€λŠ” 데 μ‚¬μš©λ©λ‹ˆλ‹€.
class Node(tf.experimental.ExtensionType): value: tf.Tensor children: Tuple["Node", ...] = () Node(3, [Node(5), Node(2)])
μ„œλΈŒν΄λž˜μŠ€ μ •μ˜ ν™•μž₯ μœ ν˜•μ€ ν‘œμ€€ Python ꡬ문을 μ‚¬μš©ν•˜μ—¬ ν•˜μœ„ λΆ„λ₯˜λ  수 μžˆμŠ΅λ‹ˆλ‹€. ν™•μž₯ μœ ν˜• ν•˜μœ„ ν΄λž˜μŠ€λŠ” μƒˆ ν•„λ“œ, λ©”μ„œλ“œ 및 속성을 μΆ”κ°€ν•  수 μžˆμŠ΅λ‹ˆλ‹€. μƒμ„±μž, 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„ 및 λ“±ν˜Έ μ—°μ‚°μžλ₯Ό μž¬μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œλŠ” μ„Έ 개의 Tensor ν•„λ“œλ₯Ό μ‚¬μš©ν•˜μ—¬ λ…Έλ“œ μ‚¬μ΄μ˜ κ°€μž₯자리 집합을 μΈμ½”λ”©ν•˜λŠ” TensorGraph 그런 λ‹€μŒ 각 λ…Έλ“œμ— λŒ€ν•œ "κΈ°λŠ₯ κ°’"을 κΈ°λ‘ν•˜κΈ° μœ„ν•΄ Tensor ν•„λ“œλ₯Ό μΆ”κ°€ν•˜λŠ” ν•˜μœ„ 클래슀λ₯Ό μ •μ˜ν•©λ‹ˆλ‹€. λ˜ν•œ ν•˜μœ„ ν΄λž˜μŠ€λŠ” κ°€μž₯자리λ₯Ό 따라 νŠΉμ„± 값을 μ „νŒŒν•˜λŠ” 방법을 μ •μ˜ν•©λ‹ˆλ‹€.
class TensorGraph(tf.experimental.ExtensionType): num_nodes: tf.Tensor edge_src: tf.Tensor # edge_src[e] = index of src node for edge e. edge_dst: tf.Tensor # edge_dst[e] = index of dst node for edge e. class TensorGraphWithNodeFeature(TensorGraph): node_features: tf.Tensor # node_features[n] = feature value for node n. def propagate_features(self, weight=1.0) -> 'TensorGraphWithNodeFeature': updates = tf.gather(self.node_features, self.edge_src) * weight new_node_features = tf.tensor_scatter_nd_add( self.node_features, tf.expand_dims(self.edge_dst, 1), updates) return TensorGraphWithNodeFeature( self.num_nodes, self.edge_src, self.edge_dst, new_node_features) g = TensorGraphWithNodeFeature( # Edges: 0->1, 4->3, 2->2, 2->1 num_nodes=5, edge_src=[0, 4, 2, 2], edge_dst=[1, 3, 2, 1], node_features=[10.0, 0.0, 2.0, 5.0, -1.0, 0.0]) print("Original features:", g.node_features) print("After propagating:", g.propagate_features().node_features)
개인 ν•„λ“œ μ •μ˜ ν™•μž₯ μœ ν˜•μ˜ ν•„λ“œλŠ” 접두사에 밑쀄을 λΆ™μ—¬ λΉ„κ³΅κ°œλ‘œ ν‘œμ‹œν•  수 μžˆμŠ΅λ‹ˆλ‹€(ν‘œμ€€ Python κ·œμΉ™μ— 따라). 이것은 TensorFlowκ°€ μ–΄λ–€ μ‹μœΌλ‘œλ“  ν•„λ“œλ₯Ό μ²˜λ¦¬ν•˜λŠ” 방식에 영ν–₯을 λ―ΈμΉ˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. κ·ΈλŸ¬λ‚˜ λ‹¨μˆœνžˆ ν™•μž₯ μœ ν˜•μ˜ λͺ¨λ“  μ‚¬μš©μžμ—κ²Œ ν•΄λ‹Ή ν•„λ“œκ°€ λΉ„κ³΅κ°œλΌλŠ” μ‹ ν˜Έ 역할을 ν•©λ‹ˆλ‹€. ExtensionType의 TypeSpec 각 ExtensionType ν΄λž˜μŠ€μ—λŠ” μžλ™μœΌλ‘œ μƒμ„±λ˜κ³  &lt;extension_type_name&gt;.Spec TypeSpec ν΄λž˜μŠ€κ°€ μžˆμŠ΅λ‹ˆλ‹€. μžμ„Έν•œ λ‚΄μš©μ€ μœ„μ˜ "μ€‘μ²©λœ TypeSpec" μ„Ήμ…˜μ„ μ°Έμ‘°ν•˜μ„Έμš”. TypeSpec 을 μ‚¬μš©μž μ •μ˜ν•˜λ €λ©΄ Spec μ΄λΌλŠ” 자체 쀑첩 클래슀λ₯Ό μ •μ˜ν•˜κΈ°λ§Œ ν•˜λ©΄ ExtensionType 이 이λ₯Ό μžλ™μœΌλ‘œ μƒμ„±λœ TypeSpec 의 기초둜 μ‚¬μš©ν•©λ‹ˆλ‹€. Spec 클래슀λ₯Ό μ‚¬μš©μž μ •μ˜ν•  수 μžˆμŠ΅λ‹ˆλ‹€. κΈ°λ³Έ 인쇄 κ°€λŠ₯ν•œ ν‘œν˜„μ„ μž¬μ •μ˜ν•©λ‹ˆλ‹€. κΈ°λ³Έ μƒμ„±μžλ₯Ό μž¬μ •μ˜ν•©λ‹ˆλ‹€. λ©”μ„œλ“œ, 클래슀 λ©”μ„œλ“œ, 정적 λ©”μ„œλ“œ 및 속성을 μ •μ˜ν•©λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œμ—μ„œλŠ” μ‚¬μš©ν•˜κΈ° 쉽도둝 MaskedTensor.Spec 클래슀λ₯Ό μ‚¬μš©μž μ§€μ •ν•©λ‹ˆλ‹€.
class MaskedTensor(tf.experimental.ExtensionType): values: tf.Tensor mask: tf.Tensor shape = property(lambda self: self.values.shape) dtype = property(lambda self: self.values.dtype) def __repr__(self): return masked_tensor_str(self.values, self.mask) def with_values(self, new_values): return MaskedTensor(new_values, self.mask) class Spec: def __init__(self, shape, dtype=tf.float32): self.values = tf.TensorSpec(shape, dtype) self.mask = tf.TensorSpec(shape, tf.bool) def __repr__(self): return f"MaskedTensor.Spec(shape={self.shape}, dtype={self.dtype})" shape = property(lambda self: self.values.shape) dtype = property(lambda self: self.values.dtype)
μ°Έκ³  : μ‚¬μš©μž μ •μ˜ Spec ExtensionType μ„ μ–Έλ˜μ§€ μ•Šμ€ μΈμŠ€ν„΄μŠ€ λ³€μˆ˜λ₯Ό μ‚¬μš©ν•  수 μ—†μŠ΅λ‹ˆλ‹€. ν…μ„œ API λ””μŠ€νŒ¨μΉ˜ tf.Tensor μœ ν˜•μ— μ˜ν•΄ μ •μ˜λœ μΈν„°νŽ˜μ΄μŠ€λ₯Ό μ „λ¬Έν™”ν•˜κ±°λ‚˜ ν™•μž₯ν•œλ‹€λŠ” μ μ—μ„œ "ν…μ„œμ™€ μœ μ‚¬"ν•  수 μžˆμŠ΅λ‹ˆλ‹€. ν…μ„œμ™€ μœ μ‚¬ν•œ ν™•μž₯ μœ ν˜•μ˜ μ˜ˆλ‘œλŠ” RaggedTensor , SparseTensor 및 MaskedTensor μžˆμŠ΅λ‹ˆλ‹€. λ””μŠ€νŒ¨μΉ˜ λ°μ½”λ ˆμ΄ν„° λŠ” ν…μ„œμ™€ μœ μ‚¬ν•œ ν™•μž₯ μœ ν˜•μ— 적용될 λ•Œ TensorFlow μž‘μ—…μ˜ κΈ°λ³Έ λ™μž‘μ„ μž¬μ •μ˜ν•˜λŠ” 데 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. TensorFlowλŠ” ν˜„μž¬ μ„Έ 가지 λ””μŠ€νŒ¨μΉ˜ λ°μ½”λ ˆμ΄ν„°λ₯Ό μ •μ˜ν•©λ‹ˆλ‹€. @tf.experimental.dispatch_for_api(tf_api) @tf.experimental.dispatch_for_unary_elementwise_api(x_type) @tf.experimental.dispatch_for_binary_elementwise_apis(x_type, y_type) 단일 API에 λŒ€ν•œ λ””μŠ€νŒ¨μΉ˜ tf.experimental.dispatch_for_api λ°μ½”λ ˆμ΄ν„°λŠ” μ§€μ •λœ μ„œλͺ…μœΌλ‘œ 호좜될 λ•Œ μ§€μ •λœ TensorFlow μž‘μ—…μ˜ κΈ°λ³Έ λ™μž‘μ„ μž¬μ •μ˜ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄ 이 λ°μ½”λ ˆμ΄ν„°λ₯Ό μ‚¬μš©ν•˜μ—¬ tf.stack 이 MaskedTensor 값을 μ²˜λ¦¬ν•˜λŠ” 방법을 지정할 수 μžˆμŠ΅λ‹ˆλ‹€.
@tf.experimental.dispatch_for_api(tf.stack) def masked_stack(values: List[MaskedTensor], axis = 0): return MaskedTensor(tf.stack([v.values for v in values], axis), tf.stack([v.mask for v in values], axis))
MaskedTensor κ°’ λͺ©λ‘κ³Ό ν•¨κ»˜ 호좜될 λ•Œλ§ˆλ‹€ tf.stack λŒ€ν•œ κΈ°λ³Έ κ΅¬ν˜„μ„ μž¬μ •μ˜ values typing.List[MaskedTensor] μ£Όμ„μœΌλ‘œ μ§€μ •λ˜μ–΄ 있기 λ•Œλ¬Έμž…λ‹ˆλ‹€):
x = MaskedTensor([1, 2, 3], [True, True, False]) y = MaskedTensor([4, 5, 6], [False, True, True]) tf.stack([x, y])
tf.stack 이 ν˜Όν•©λœ MaskedTensor 및 Tensor κ°’ λͺ©λ‘μ„ μ²˜λ¦¬ν•  수 μžˆλ„λ‘ ν•˜λ €λ©΄ values λ§€κ°œλ³€μˆ˜μ— λŒ€ν•œ μœ ν˜• 주석을 κ΅¬μ²΄ν™”ν•˜κ³  ν•¨μˆ˜ 본문을 μ μ ˆν•˜κ²Œ μ—…λ°μ΄νŠΈν•  수 μžˆμŠ΅λ‹ˆλ‹€.
tf.experimental.unregister_dispatch_for(masked_stack) def convert_to_masked_tensor(x): if isinstance(x, MaskedTensor): return x else: return MaskedTensor(x, tf.ones_like(x, tf.bool)) @tf.experimental.dispatch_for_api(tf.stack) def masked_stack_v2(values: List[Union[MaskedTensor, tf.Tensor]], axis = 0): values = [convert_to_masked_tensor(v) for v in values] return MaskedTensor(tf.stack([v.values for v in values], axis), tf.stack([v.mask for v in values], axis)) x = MaskedTensor([1, 2, 3], [True, True, False]) y = tf.constant([4, 5, 6]) tf.stack([x, y, x])
μž¬μ •μ˜ν•  수 μžˆλŠ” API λͺ©λ‘μ€ tf.experimental.dispatch_for_api λŒ€ν•œ API μ„€λͺ…μ„œλ₯Ό μ°Έμ‘°ν•˜μ„Έμš”. λͺ¨λ“  단항 μš”μ†Œλ³„ API에 λŒ€ν•œ λ””μŠ€νŒ¨μΉ˜ tf.experimental.dispatch_for_unary_elementwise_apis λ°μ½”λ ˆμ΄ν„°λŠ” 첫 번째 인수(일반적으둜 이름이 x )에 λŒ€ν•œ 값이 μœ ν˜• 주석 x_type κ³Ό μΌμΉ˜ν•  λ•Œλ§ˆλ‹€ λͺ¨λ“  단항 μš”μ†Œλ³„ μ—°μ‚°(예: tf.math.cos )의 κΈ°λ³Έ λ™μž‘μ„ μž¬μ •μ˜ν•©λ‹ˆλ‹€. λ°μ½”λ ˆμ΄νŒ…λœ ν•¨μˆ˜λŠ” 두 개의 인수λ₯Ό μ·¨ν•΄μ•Ό ν•©λ‹ˆλ‹€. api_func : 단일 λ§€κ°œλ³€μˆ˜λ₯Ό μ·¨ν•˜κ³  μš”μ†Œλ³„ 연산을 μˆ˜ν–‰ν•˜λŠ” ν•¨μˆ˜(예: tf.abs ). x : μš”μ†Œλ³„ μ—°μ‚°μ˜ 첫 번째 μΈμˆ˜μž…λ‹ˆλ‹€. MaskedTensor μœ ν˜•μ„ μ²˜λ¦¬ν•˜κΈ° μœ„ν•΄ λͺ¨λ“  단항 μš”μ†Œλ³„ 연산을 μ—…λ°μ΄νŠΈν•©λ‹ˆλ‹€.
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor) def masked_tensor_unary_elementwise_api_handler(api_func, x): return MaskedTensor(api_func(x.values), x.mask)
MaskedTensor μ—μ„œ 단항 μš”μ†Œλ³„ 연산이 호좜될 λ•Œλ§ˆλ‹€ μ‚¬μš©λ©λ‹ˆλ‹€.
x = MaskedTensor([1, -2, -3], [True, False, True]) print(tf.abs(x)) print(tf.ones_like(x, dtype=tf.float32))
λ°”μ΄λ„ˆλ¦¬ λͺ¨λ“  μš”μ†Œλ³„ API에 λŒ€ν•œ λ””μŠ€νŒ¨μΉ˜ λ§ˆμ°¬κ°€μ§€λ‘œ tf.experimental.dispatch_for_binary_elementwise_apis MaskedTensor μœ ν˜•μ„ μ²˜λ¦¬ν•˜κΈ° μœ„ν•΄ λͺ¨λ“  λ°”μ΄λ„ˆλ¦¬ μš”μ†Œλ³„ 연산을 μ—…λ°μ΄νŠΈν•˜λŠ” 데 μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
@tf.experimental.dispatch_for_binary_elementwise_apis(MaskedTensor, MaskedTensor) def masked_tensor_binary_elementwise_api_handler(api_func, x, y): return MaskedTensor(api_func(x.values, y.values), x.mask & y.mask) x = MaskedTensor([1, -2, -3], [True, False, True]) y = MaskedTensor([[4], [5]], [[True], [False]]) tf.math.add(x, y)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
aa4f6b6316221929ee8ecdc6ef7675d8
μž¬μ •μ˜λ˜λŠ” μš”μ†Œλ³„ API λͺ©λ‘μ€ tf.experimental.dispatch_for_unary_elementwise_apis 및 tf.experimental.dispatch_for_binary_elementwise_apis λŒ€ν•œ API λ¬Έμ„œλ₯Ό μ°Έμ‘°ν•˜μ„Έμš”. 일괄 처리 κ°€λŠ₯ν•œ ν™•μž₯ μœ ν˜• ExtensionType 단일 μΈμŠ€ν„΄μŠ€ κ°’μ˜ 배치λ₯Ό λ‚˜νƒ€λ‚΄λŠ” 데 μ‚¬μš©ν•  μˆ˜μžˆλŠ” 경우 batchable이닀. Tensor 배치 차원을 μΆ”κ°€ν•˜μ—¬ μˆ˜ν–‰λ©λ‹ˆλ‹€. λ‹€μŒ TensorFlow APIλ₯Ό μ‚¬μš©ν•˜λ €λ©΄ λͺ¨λ“  ν™•μž₯ μœ ν˜• μž…λ ₯이 일괄 처리 κ°€λŠ₯ν•΄μ•Ό ν•©λ‹ˆλ‹€. tf.data.Dataset ( batch , unbatch , from_tensor_slices ) tf.Keras ( fit , evaluate , predict ) tf.map_fn 기본적으둜 BatchableExtensionType Tensor , CompositeTensor 및 ExtensionType 일괄 μ²˜λ¦¬ν•˜μ—¬ 일괄 처리된 값을 μƒμ„±ν•©λ‹ˆλ‹€. 이것이 ν΄λž˜μŠ€μ— μ ν•©ν•˜μ§€ μ•Šμ€ 경우 tf.experimental.ExtensionTypeBatchEncoder λ₯Ό μ‚¬μš©ν•˜μ—¬ 이 κΈ°λ³Έ λ™μž‘μ„ μž¬μ •μ˜ν•΄μ•Ό ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, κ°œλ³„ ν¬μ†Œ ν…μ„œμ˜ values , indices 및 dense_shape tf.SparseTensor κ°’μ˜ 배치λ₯Ό λ§Œλ“œλŠ” 것은 μ μ ˆν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. λŒ€λΆ€λΆ„μ˜ 경우 μ΄λŸ¬ν•œ ν…μ„œλŠ” ν˜Έν™˜λ˜μ§€ μ•ŠλŠ” λͺ¨μ–‘을 가지고 있기 λ•Œλ¬Έμ— μŠ€νƒν•  수 μ—†μŠ΅λ‹ˆλ‹€. ; κ°€λŠ₯ν•˜λ”λΌλ„ κ²°κ³ΌλŠ” μœ νš¨ν•œ SparseTensor . μ°Έκ³  : BatchableExtensionType tf.stack , tf.concat , tf.slice 등에 λŒ€ν•œ λ””μŠ€νŒ¨μ²˜λ₯Ό μžλ™μœΌλ‘œ μ •μ˜ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€ . μ΄λŸ¬ν•œ APIμ—μ„œ 클래슀λ₯Ό 지원해야 ν•˜λŠ” 경우 μœ„μ—μ„œ μ„€λͺ…ν•œ λ””μŠ€νŒ¨μΉ˜ λ°μ½”λ ˆμ΄ν„°λ₯Ό μ‚¬μš©ν•˜μ„Έμš”. BatchableExtensionType 예: λ„€νŠΈμ›Œν¬ Network 클래슀λ₯Ό 생각해 λ³΄μ‹­μ‹œμ˜€. 이 ν΄λž˜μŠ€λŠ” 각 λ…Έλ“œμ—μ„œ μˆ˜ν–‰ν•΄μ•Ό ν•  μž‘μ—…μ˜ μ–‘κ³Ό λ…Έλ“œ 간에 μž‘μ—…μ„ μ΄λ™ν•˜λŠ” 데 μ‚¬μš©ν•  수 μžˆλŠ” λŒ€μ—­ν­μ„ μΆ”μ ν•©λ‹ˆλ‹€.
class Network(tf.experimental.ExtensionType): # This version is not batchable. work: tf.Tensor # work[n] = work left to do at node n bandwidth: tf.Tensor # bandwidth[n1, n2] = bandwidth from n1->n2 net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]]) net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]])
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
24e590ec6a8ef81d874edb7aa60df188
이 μœ ν˜•μ„ 일괄 처리 κ°€λŠ₯ν•˜κ²Œ λ§Œλ“€λ €λ©΄ κΈ°λ³Έ μœ ν˜•μ„ BatchableExtensionType λ³€κ²½ν•˜κ³  선택적 일괄 처리 차원을 ν¬ν•¨ν•˜λ„λ‘ 각 ν•„λ“œμ˜ λͺ¨μ–‘을 μ‘°μ •ν•©λ‹ˆλ‹€. λ‹€μŒ μ˜ˆμ œμ—μ„œλŠ” 배치 λͺ¨μ–‘을 μΆ”μ ν•˜κΈ° shape ν•„λ“œλ„ μΆ”κ°€ν•©λ‹ˆλ‹€. 이 shape ν•„λ“œλŠ” ν•„μš”λ‘œν•˜μ§€ μ•ŠλŠ” tf.data.Dataset λ˜λŠ” tf.map_fn μžˆμ§€λ§Œ μš”κ΅¬ν•˜λŠ” tf.Keras .
class Network(tf.experimental.BatchableExtensionType): shape: tf.TensorShape # batch shape. A single network has shape=[]. work: tf.Tensor # work[*shape, n] = work left to do at node n bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2 def __init__(self, work, bandwidth): self.work = tf.convert_to_tensor(work) self.bandwidth = tf.convert_to_tensor(bandwidth) work_batch_shape = self.work.shape[:-1] bandwidth_batch_shape = self.bandwidth.shape[:-2] self.shape = work_batch_shape.merge_with(bandwidth_batch_shape) def __repr__(self): return network_repr(self) def network_repr(network): work = network.work bandwidth = network.bandwidth if hasattr(work, 'numpy'): work = ' '.join(str(work.numpy()).split()) if hasattr(bandwidth, 'numpy'): bandwidth = ' '.join(str(bandwidth.numpy()).split()) return (f"<Network shape={network.shape} work={work} bandwidth={bandwidth}>") net1 = Network([5., 3, 8], [[0., 2, 0], [2, 0, 3], [0, 3, 0]]) net2 = Network([3., 4, 2], [[0., 2, 2], [2, 0, 2], [2, 2, 0]]) batch_of_networks = Network( work=tf.stack([net1.work, net2.work]), bandwidth=tf.stack([net1.bandwidth, net2.bandwidth])) print(f"net1={net1}") print(f"net2={net2}") print(f"batch={batch_of_networks}")
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
97ec4e2441e650ae15650e5b4281349c
You can then use tf.data.Dataset to iterate through a batch of networks.
dataset = tf.data.Dataset.from_tensor_slices(batch_of_networks) for i, network in enumerate(dataset): print(f"Batch element {i}: {network}")
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
80dc35c8bbb52fad9695dbea4d0dab20
You can also use map_fn to apply a function to each batch element.
def balance_work_greedy(network): delta = (tf.expand_dims(network.work, -1) - tf.expand_dims(network.work, -2)) delta /= 4 delta = tf.maximum(tf.minimum(delta, network.bandwidth), -network.bandwidth) new_work = network.work + tf.reduce_sum(delta, -1) return Network(new_work, network.bandwidth) tf.map_fn(balance_work_greedy, batch_of_networks)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
9a33db2691085ab72cddab5ed96c1580
TensorFlow APIs that support ExtensionTypes. @tf.function: tf.function is a decorator that precomputes TensorFlow graphs for Python functions, which can substantially improve the performance of your TensorFlow code. Extension type values can be used transparently with @tf.function-decorated functions.
class Pastry(tf.experimental.ExtensionType): sweetness: tf.Tensor # 2d embedding that encodes sweetness chewiness: tf.Tensor # 2d embedding that encodes chewiness @tf.function def combine_pastry_features(x: Pastry): return (x.sweetness + x.chewiness) / 2 cookie = Pastry(sweetness=[1.2, 0.4], chewiness=[0.8, 0.2]) combine_pastry_features(cookie)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
63d63fc09c92dbe4e0d08d16452025ba
If you wish to explicitly specify the input_signature for a tf.function, you can do so using the extension type's TypeSpec.
pastry_spec = Pastry.Spec(tf.TensorSpec([2]), tf.TensorSpec(2)) @tf.function(input_signature=[pastry_spec]) def increase_sweetness(x: Pastry, delta=1.0): return Pastry(x.sweetness + delta, x.chewiness) increase_sweetness(cookie)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
044bd09ebce5a9429eed4808b3e6754c
Concrete functions encapsulate the individual traced graphs that are built by tf.function. Extension types can be used transparently with concrete functions.
cf = combine_pastry_features.get_concrete_function(pastry_spec) cf(cookie)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
f37627e21e70385efc417fe15cc7cee3
Control flow operations: extension types are supported by TensorFlow's control-flow operations: tf.cond, tf.case, tf.while_loop, tf.identity.
# Example: using tf.cond to select between two MaskedTensors. Note that the # two MaskedTensors don't need to have the same shape. a = MaskedTensor([1., 2, 3], [True, False, True]) b = MaskedTensor([22., 33, 108, 55], [True, True, True, False]) condition = tf.constant(True) print(tf.cond(condition, lambda: a, lambda: b)) # Example: using tf.while_loop with MaskedTensor. cond = lambda i, _: i < 10 def body(i, mt): return i + 1, mt.with_values(mt.values + 3 / 7) print(tf.while_loop(cond, body, [0, b])[1])
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
c83bcc13b45601cdf9260bacfaebaec4
Autograph control flow: extension types are also supported by control-flow statements in tf.function (using autograph). In the following example, the if statement and for statement are automatically converted to tf.cond and tf.while_loop operations, which support extension types.
@tf.function def fn(x, b): if b: x = MaskedTensor(x, tf.less(x, 0)) else: x = MaskedTensor(x, tf.greater(x, 0)) for i in tf.range(5 if b else 7): x = x.with_values(x.values + 1 / 2) return x print(fn(tf.constant([1., -2, 3]), tf.constant(True))) print(fn(tf.constant([1., -2, 3]), tf.constant(False)))
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
c8252206b963a7da687910c1730d0398
μΌ€λΌμŠ€ tf.keras λŠ” λ”₯ λŸ¬λ‹ λͺ¨λΈμ„ κ΅¬μΆ•ν•˜κ³  ν›ˆλ ¨ν•˜κΈ° μœ„ν•œ TensorFlow의 κ³ κΈ‰ APIμž…λ‹ˆλ‹€. ν™•μž₯ μœ ν˜•μ€ Keras λͺ¨λΈμ— λŒ€ν•œ μž…λ ₯으둜 μ „λ‹¬λ˜κ³ , Keras 계측 간에 μ „λ‹¬λ˜κ³ , Keras λͺ¨λΈμ—μ„œ λ°˜ν™˜λ  수 μžˆμŠ΅λ‹ˆλ‹€. KerasλŠ” ν˜„μž¬ ν™•μž₯ μœ ν˜•μ— 두 가지 μš”κ΅¬ 사항을 μ μš©ν•©λ‹ˆλ‹€. 배치 κ°€λŠ₯ν•΄μ•Ό ν•©λ‹ˆλ‹€(μœ„μ˜ "배치 κ°€λŠ₯ν•œ ExtensionType" μ°Έμ‘°). shape μ΄λΌλŠ” ν•„λ“œ λ˜λŠ” 속성이 μžˆμ–΄μ•Ό ν•©λ‹ˆλ‹€. shape[0] 은 배치 μ°¨μ›μœΌλ‘œ κ°„μ£Όλ©λ‹ˆλ‹€. λ‹€μŒ 두 ν•˜μœ„ μ„Ήμ…˜μ—μ„œλŠ” ν™•μž₯ μœ ν˜•μ„ Keras와 ν•¨κ»˜ μ‚¬μš©ν•˜λŠ” 방법을 λ³΄μ—¬μ£ΌλŠ” 예λ₯Ό μ œκ³΅ν•©λ‹ˆλ‹€. Keras 예: Network 첫 번째 μ˜ˆμ—μ„œλŠ” λ…Έλ“œ κ°„μ˜ λΆ€ν•˜ λΆ„μ‚° μž‘μ—…μ— μ‚¬μš©ν•  수 μžˆλŠ” μœ„μ˜ "Batchable ExtensionTypes" μ„Ήμ…˜μ— μ •μ˜λœ Network κ·Έ μ •μ˜λŠ” μ—¬κΈ°μ—μ„œ λ°˜λ³΅λ©λ‹ˆλ‹€.
class Network(tf.experimental.BatchableExtensionType): shape: tf.TensorShape # batch shape. A single network has shape=[]. work: tf.Tensor # work[*shape, n] = work left to do at node n bandwidth: tf.Tensor # bandwidth[*shape, n1, n2] = bandwidth from n1->n2 def __init__(self, work, bandwidth): self.work = tf.convert_to_tensor(work) self.bandwidth = tf.convert_to_tensor(bandwidth) work_batch_shape = self.work.shape[:-1] bandwidth_batch_shape = self.bandwidth.shape[:-2] self.shape = work_batch_shape.merge_with(bandwidth_batch_shape) def __repr__(self): return network_repr(self) single_network = Network( # A single network w/ 4 nodes. work=[8.0, 5, 12, 2], bandwidth=[[0.0, 1, 2, 2], [1, 0, 0, 2], [2, 0, 0, 1], [2, 2, 1, 0]]) batch_of_networks = Network( # Batch of 2 networks, each w/ 2 nodes. work=[[8.0, 5], [3, 2]], bandwidth=[[[0.0, 1], [1, 0]], [[0, 2], [2, 0]]])
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
682f8cf0226763ad199827023800b1c5
You can define a new Keras layer that processes Networks.
class BalanceNetworkLayer(tf.keras.layers.Layer): """Layer that balances work between nodes in a network. Shifts work from more busy nodes to less busy nodes, constrained by bandwidth. """ def call(self, inputs): # This function is defined above, in "Batchable ExtensionTypes" section. return balance_work_greedy(inputs)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
8758e00d1795d0dc915294f8b0d5e468
그런 λ‹€μŒ 이 λ ˆμ΄μ–΄λ₯Ό μ‚¬μš©ν•˜μ—¬ κ°„λ‹¨ν•œ λͺ¨λΈμ„ λ§Œλ“€ 수 μžˆμŠ΅λ‹ˆλ‹€. ExtensionType 을 λͺ¨λΈμ— μ œκ³΅ν•˜λ €λ©΄ type_spec 이 ν™•μž₯ μœ ν˜•μ˜ TypeSpec tf.keras.layer.Input λ ˆμ΄μ–΄λ₯Ό μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. Keras λͺ¨λΈμ„ μ‚¬μš©ν•˜μ—¬ 배치λ₯Ό μ²˜λ¦¬ν•˜λŠ” 경우 type_spec 에 배치 차원이 ν¬ν•¨λ˜μ–΄μ•Ό ν•©λ‹ˆλ‹€.
input_spec = Network.Spec(shape=None, work=tf.TensorSpec(None, tf.float32), bandwidth=tf.TensorSpec(None, tf.float32)) model = tf.keras.Sequential([ tf.keras.layers.Input(type_spec=input_spec), BalanceNetworkLayer(), ])
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
cd7a8ba20190183a804fe18c38b7fb95
Finally, you can apply the model to a single network and to a batch of networks.
model(single_network) model(batch_of_networks)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
6c41e93d1adf2fe7e56a649caa04a5fb
μΌ€λΌμŠ€ μ˜ˆμ‹œ: MaskedTensor 이 μ˜ˆμ—μ„œ MaskedTensor Keras λ₯Ό μ§€μ›ν•˜λ„λ‘ ν™•μž₯λ˜μ—ˆμŠ΅λ‹ˆλ‹€. shape values ν•„λ“œμ—μ„œ κ³„μ‚°λ˜λŠ” μ†μ„±μœΌλ‘œ μ •μ˜λ©λ‹ˆλ‹€. TypeSpec λͺ¨λ‘μ— 이 속성을 μΆ”κ°€ν•΄μ•Ό ν•©λ‹ˆλ‹€. MaskedTensor SavedModel 직렬화에 ν•„μš”ν•œ __name__ λ³€μˆ˜λ„ μ •μ˜ν•©λ‹ˆλ‹€(μ•„λž˜ μ°Έμ‘°).
class MaskedTensor(tf.experimental.BatchableExtensionType):
  # __name__ is required for serialization in SavedModel; see below for details.
  __name__ = 'extension_type_colab.MaskedTensor'

  values: tf.Tensor
  mask: tf.Tensor

  shape = property(lambda self: self.values.shape)
  dtype = property(lambda self: self.values.dtype)

  def with_default(self, default):
    return tf.where(self.mask, self.values, default)

  def __repr__(self):
    return masked_tensor_str(self.values, self.mask)

  class Spec:
    def __init__(self, shape, dtype=tf.float32):
      self.values = tf.TensorSpec(shape, dtype)
      self.mask = tf.TensorSpec(shape, tf.bool)

    shape = property(lambda self: self.values.shape)
    dtype = property(lambda self: self.values.dtype)

    def with_shape(self, shape):
      return MaskedTensor.Spec(tf.TensorSpec(shape, self.values.dtype),
                               tf.TensorSpec(shape, self.mask.dtype))
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
87709f55a5ad3fdcd246851847ac5395
λ‹€μŒμœΌλ‘œ λ””μŠ€νŒ¨μΉ˜ λ°μ½”λ ˆμ΄ν„°λŠ” μ—¬λŸ¬ TensorFlow API의 κΈ°λ³Έ λ™μž‘μ„ μž¬μ •μ˜ν•˜λŠ” 데 μ‚¬μš©λ©λ‹ˆλ‹€. μ΄λŸ¬ν•œ APIλŠ” ν‘œμ€€ Keras λ ˆμ΄μ–΄(예: Dense MaskedTensor 와 ν•¨κ»˜ ν•΄λ‹Ή λ ˆμ΄μ–΄λ₯Ό μ‚¬μš©ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 이 예의 λͺ©μ μ„ matmul 은 λ§ˆμŠ€ν‚Ήλœ 값을 0으둜 μ²˜λ¦¬ν•˜λ„λ‘ μ •μ˜λ©λ‹ˆλ‹€(즉, μ œν’ˆμ— ν¬ν•¨ν•˜μ§€ μ•ŠκΈ° μœ„ν•΄).
@tf.experimental.dispatch_for_unary_elementwise_apis(MaskedTensor) def unary_elementwise_op_handler(op, x): return MaskedTensor(op(x.values), x.mask) @tf.experimental.dispatch_for_binary_elementwise_apis( Union[MaskedTensor, tf.Tensor], Union[MaskedTensor, tf.Tensor]) def binary_elementwise_op_handler(op, x, y): x = convert_to_masked_tensor(x) y = convert_to_masked_tensor(y) return MaskedTensor(op(x.values, y.values), x.mask & y.mask) @tf.experimental.dispatch_for_api(tf.matmul) def masked_matmul(a: MaskedTensor, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, output_type=None): if isinstance(a, MaskedTensor): a = a.with_default(0) if isinstance(b, MaskedTensor): b = b.with_default(0) return tf.matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, output_type)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
79879cc2ab73070cab7cd7290ebdba9a
그런 λ‹€μŒ ν‘œμ€€ Keras λ ˆμ΄μ–΄λ₯Ό μ‚¬μš©ν•˜μ—¬ MaskedTensor μž…λ ₯을 ν—ˆμš©ν•˜λŠ” Keras λͺ¨λΈμ„ ꡬ성할 수 μžˆμŠ΅λ‹ˆλ‹€.
input_spec = MaskedTensor.Spec([None, 2], tf.float32) masked_tensor_model = tf.keras.Sequential([ tf.keras.layers.Input(type_spec=input_spec), tf.keras.layers.Dense(16, activation="relu"), tf.keras.layers.Dense(1)]) masked_tensor_model.compile(loss='binary_crossentropy', optimizer='rmsprop') a = MaskedTensor([[1., 2], [3, 4], [5, 6]], [[True, False], [False, True], [True, True]]) masked_tensor_model.fit(a, tf.constant([[1], [0], [1]]), epochs=3) print(masked_tensor_model(a))
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
9efa4f45901f7ada6ad536851771af1e
μ €μž₯된 λͺ¨λΈ SavedModel 은 κ°€μ€‘μΉ˜μ™€ 계산을 λͺ¨λ‘ ν¬ν•¨ν•˜λŠ” μ§λ ¬ν™”λœ TensorFlow ν”„λ‘œκ·Έλž¨μž…λ‹ˆλ‹€. Keras λͺ¨λΈ λ˜λŠ” μ‚¬μš©μž 지정 λͺ¨λΈμ—μ„œ ꡬ좕할 수 μžˆμŠ΅λ‹ˆλ‹€. 두 경우 λͺ¨λ‘ ν™•μž₯ μœ ν˜•μ€ SavedModel에 μ˜ν•΄ μ •μ˜λœ ν•¨μˆ˜ 및 λ©”μ†Œλ“œμ™€ ν•¨κ»˜ 투λͺ…ν•˜κ²Œ μ‚¬μš©λ  수 μžˆμŠ΅λ‹ˆλ‹€. __name__ ν•„λ“œκ°€ μžˆλŠ” ν•œ ν™•μž₯ μœ ν˜•μ„ μ²˜λ¦¬ν•˜λŠ” λͺ¨λΈ, 계측 및 ν•¨μˆ˜λ₯Ό μ €μž₯ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 이 이름은 ν™•μž₯ μœ ν˜•μ„ λ“±λ‘ν•˜λŠ” 데 μ‚¬μš©λ˜λ―€λ‘œ λͺ¨λΈμ„ λ‘œλ“œν•  λ•Œ 찾을 수 μžˆμŠ΅λ‹ˆλ‹€. 예: Keras λͺ¨λΈ μ €μž₯ ν™•μž₯ μœ ν˜•μ„ μ‚¬μš©ν•˜λŠ” SavedModel μ‚¬μš©ν•˜μ—¬ μ €μž₯ν•  수 μžˆμŠ΅λ‹ˆλ‹€.
masked_tensor_model_path = tempfile.mkdtemp() tf.saved_model.save(masked_tensor_model, masked_tensor_model_path) imported_model = tf.saved_model.load(masked_tensor_model_path) imported_model(a)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
4fcf8152ff11dca05cd0f309b1235ae7
Example: saving a custom model. SavedModel can also be used to save custom tf.Module subclasses with functions that process extension types.
class CustomModule(tf.Module): def __init__(self, variable_value): super().__init__() self.v = tf.Variable(variable_value) @tf.function def grow(self, x: MaskedTensor): """Increase values in `x` by multiplying them by `self.v`.""" return MaskedTensor(x.values * self.v, x.mask) module = CustomModule(100.0) module.grow.get_concrete_function(MaskedTensor.Spec(shape=None, dtype=tf.float32)) custom_module_path = tempfile.mkdtemp() tf.saved_model.save(module, custom_module_path) imported_model = tf.saved_model.load(custom_module_path) imported_model.grow(MaskedTensor([1., 2, 3], [False, True, False]))
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
e33d44ea33c1167bfefa82ee8cbf0983
Loading a SavedModel when the ExtensionType is unavailable: if you load a SavedModel that uses an ExtensionType, but that ExtensionType is not available (that is, it has not been imported), a warning is shown and TensorFlow falls back to using an "anonymous extension type" object. This object has the same fields as the original type, but lacks any further customization you added for the type, such as custom methods or properties. Using ExtensionTypes with TensorFlow Serving: currently, TensorFlow Serving (and other consumers of the SavedModel "signatures" dictionary) require that all inputs and outputs be raw tensors. If you wish to use TensorFlow Serving with a model that uses extension types, you can add wrapper methods that compose or decompose extension type values from tensors. For example:
class CustomModuleWrapper(tf.Module): def __init__(self, variable_value): super().__init__() self.v = tf.Variable(variable_value) @tf.function def var_weighted_mean(self, x: MaskedTensor): """Mean value of unmasked values in x, weighted by self.v.""" x = MaskedTensor(x.values * self.v, x.mask) return (tf.reduce_sum(x.with_default(0)) / tf.reduce_sum(tf.cast(x.mask, x.dtype))) @tf.function() def var_weighted_mean_wrapper(self, x_values, x_mask): """Raw tensor wrapper for var_weighted_mean.""" return self.var_weighted_mean(MaskedTensor(x_values, x_mask)) module = CustomModuleWrapper([3., 2., 8., 5.]) module.var_weighted_mean_wrapper.get_concrete_function( tf.TensorSpec(None, tf.float32), tf.TensorSpec(None, tf.bool)) custom_module_path = tempfile.mkdtemp() tf.saved_model.save(module, custom_module_path) imported_model = tf.saved_model.load(custom_module_path) x = MaskedTensor([1., 2., 3., 4.], [False, True, False, True]) imported_model.var_weighted_mean_wrapper(x.values, x.mask)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
9a7fd2a67bc61f8b07ff67ce7920c150
λ°μ΄ν„°μ„ΈνŠΈ tf.data λŠ” κ°„λ‹¨ν•˜κ³  μž¬μ‚¬μš© κ°€λŠ₯ν•œ λΆ€λΆ„μœΌλ‘œ λ³΅μž‘ν•œ μž…λ ₯ νŒŒμ΄ν”„λΌμΈμ„ ꡬ좕할 수 μžˆλŠ” APIμž…λ‹ˆλ‹€. 핡심 데이터 κ΅¬μ‘°λŠ” tf.data.Dataset 이며, μ΄λŠ” 각 μš”μ†Œκ°€ ν•˜λ‚˜ μ΄μƒμ˜ ꡬ성 μš”μ†Œλ‘œ κ΅¬μ„±λœ 일련의 μš”μ†Œλ₯Ό λ‚˜νƒ€λƒ…λ‹ˆλ‹€. ν™•μž₯ μœ ν˜•μœΌλ‘œ λ°μ΄ν„°μ„ΈνŠΈ λΉŒλ“œ Dataset.from_tensors , Dataset.from_tensor_slices λ˜λŠ” Dataset.from_generator μ‚¬μš©ν•˜μ—¬ ν™•μž₯ μœ ν˜• κ°’μ—μ„œ 데이터 μ„ΈνŠΈλ₯Ό λΉŒλ“œν•  수 μžˆμŠ΅λ‹ˆλ‹€.
ds = tf.data.Dataset.from_tensors(Pastry(5, 5)) iter(ds).next() mt = MaskedTensor(tf.reshape(range(20), [5, 4]), tf.ones([5, 4])) ds = tf.data.Dataset.from_tensor_slices(mt) for value in ds: print(value) def value_gen(): for i in range(2, 7): yield MaskedTensor(range(10), [j%i != 0 for j in range(10)]) ds = tf.data.Dataset.from_generator( value_gen, output_signature=MaskedTensor.Spec(shape=[10], dtype=tf.int32)) for value in ds: print(value)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
2477207abb42dca6f440ba426bf2e034
ν™•μž₯ μœ ν˜•μ΄ μžˆλŠ” 데이터 μ„ΈνŠΈ 일괄 처리 및 일괄 ν•΄μ œ ν™•μž₯ μœ ν˜•μ˜ 데이터 μ„ΈνŠΈλ₯Ό μ‚¬μš©ν•˜μ—¬ batchand 및 unbatched 수 μžˆμŠ΅λ‹ˆλ‹€ Dataset.batch ADN Dataset.unbatch .
batched_ds = ds.batch(2) for value in batched_ds: print(value) unbatched_ds = batched_ds.unbatch() for value in unbatched_ds: print(value)
site/ko/guide/extension_type.ipynb
tensorflow/docs-l10n
apache-2.0
6cf51e7c2710f3ea5781323b3a4685f6
The concrete loss function can be set via the loss parameter. SGDClassifier supports the following loss functions:

- loss="hinge": (soft-margin) linear Support Vector Machine,
- loss="modified_huber": smoothed hinge loss,
- loss="log": logistic regression,
- and all regression losses below.

The first two loss functions are lazy: they only update the model parameters if an example violates the margin constraint, which makes training very efficient and may result in sparser models, even when an L2 penalty is used. Using loss="log" or loss="modified_huber" enables the predict_proba method, which gives a vector of probability estimates P(y|x) per sample x:
clf = SGDClassifier(loss="log").fit(X, y) clf.predict_proba([[1., 1.]])
Lectures/Lecture6-Streams/SGD TESTING.ipynb
hethapu/big-data-python-class
mit
87450905665bae4631fde9575806d05c
The default setting is penalty="l2". The L1 penalty leads to sparse solutions, driving most coefficients to zero. The Elastic Net solves some deficiencies of the L1 penalty in the presence of highly correlated attributes. The parameter l1_ratio controls the convex combination of L1 and L2 penalty.
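To make the role of l1_ratio concrete, the combined penalty can be written out by hand. This is an illustrative sketch, not scikit-learn's internal code; the 0.5 factor on the L2 term follows the convention used in scikit-learn's elastic-net documentation:

```python
# Elastic net penalty: a convex combination of L1 and L2, mixed by l1_ratio.
# l1_ratio=1.0 gives a pure L1 penalty, l1_ratio=0.0 a pure L2 penalty.

def elastic_net_penalty(w, alpha=0.01, l1_ratio=0.15):
    l1 = sum(abs(wi) for wi in w)        # ||w||_1
    l2 = sum(wi * wi for wi in w)        # ||w||_2^2
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2)

w = [0.5, -1.0, 0.0, 2.0]
print(elastic_net_penalty(w, l1_ratio=1.0))  # pure L1: 0.01 * 3.5
print(elastic_net_penalty(w, l1_ratio=0.0))  # pure L2: 0.01 * 0.5 * 5.25
```

Intermediate values of l1_ratio interpolate between the two, which is how the Elastic Net keeps some of L1's sparsity while behaving better on correlated features.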
%matplotlib inline
# SGD: Maximum Margin Separating hyperplane
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_blobs

# we create 50 separable points
X, Y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)

# fit the model
clf = SGDClassifier(loss="hinge", alpha=0.01, max_iter=200, fit_intercept=True)
clf.fit(X, Y)

# plot the line, the points, and the nearest vectors to the plane
xx = np.linspace(-1, 5, 10)
yy = np.linspace(-1, 5, 10)
X1, X2 = np.meshgrid(xx, yy)
Z = np.empty(X1.shape)
for (i, j), val in np.ndenumerate(X1):
    x1 = val
    x2 = X2[i, j]
    p = clf.decision_function([[x1, x2]])
    Z[i, j] = p[0]
levels = [-1.0, 0.0, 1.0]
linestyles = ['dashed', 'solid', 'dashed']
colors = 'k'
plt.contour(X1, X2, Z, levels, colors=colors, linestyles=linestyles)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.axis('tight')
plt.show()
Lectures/Lecture6-Streams/SGD TESTING.ipynb
hethapu/big-data-python-class
mit
069030658e68ba7b64d2930686d63c38
customized_KoNLPy currently only provides a wrapping class for the Twitter Korean analyzer. The Twitter class in customized_KoNLPy adds functions on top of KoNLPy's tag module. Twitter.add_dictionary(words, tag) is where a user can add entries to the dictionary. You can add words one at a time. After adding them, you can inspect Twitter's hidden variable _dictionary._pos2words to see the words you entered. If you are running the tutorial code from a git clone, run the code below to add the path.
import sys sys.path.append('../') from ckonlpy.tag import Twitter twitter = Twitter() twitter.add_dictionary('이', 'Modifier') twitter.add_dictionary('우리', 'Modifier') twitter.add_dictionary('이번', 'Modifier') twitter.add_dictionary('μ•„μ΄μ˜€μ•„μ΄', 'Noun') twitter.add_dictionary('행사', 'Noun') twitter.add_dictionary('아이', 'Noun') twitter.add_dictionary('λ²ˆκ²ƒ', 'Noun') twitter.add_dictionary('것', 'Noun') twitter.add_dictionary('은', 'Josa') twitter.add_dictionary('λŠ”', 'Josa') twitter._dictionary._pos2words
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
00f84f7ee3ffa3b124cf70cf540e9305
After adding the dictionary entries, you can confirm that 'μ•„μ΄μ˜€μ•„μ΄' is now correctly recognized as a noun.
twitter.pos('μš°λ¦¬μ•„μ΄μ˜€μ•„μ΄λŠ” 정말 μ΄λ»μš”') twitter.pos('μ•„μ΄μ˜€μ•„μ΄ μ΄λ»μš”')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
759279028f6bd2e152dd00ce4e28f29c
When adding dictionary entries, you can also enter several words for one part of speech at once: Twitter.add_dictionary(words, tag) accepts multiple words as a list of str.
twitter.add_dictionary(['νŠΈμ™€μ΄μŠ€', 'tt', '트λ‘₯이', 'κΊΌ', '우리'], 'Noun') twitter._dictionary._pos2words twitter.pos('νŠΈμ™€μ΄μŠ€ttλŠ” μ’‹μ•„μš”')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
997a518c1d14117c75e6748344035842
You can also use the Twitter analyzer's josa (particle) dictionary by passing an argument when creating Twitter().
twitter1 = Twitter(load_default_dictionary=True) len(twitter1._dictionary._pos2words['Josa'])
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
5e6022bdb93143e1a6a3c33de12de1f0
However, the eojeol '우리트λ‘₯이꺼ttλŠ”' is still not recognized correctly. The reason is that 'Noun + Noun + Josa' was not among the templates. In this case the eojeol is handed to KoNLPy for analysis, but since KoNLPy does not know the word '트λ‘₯이', it is not recognized correctly.
twitter.pos('우리트λ‘₯이꺼ttλŠ” μ’‹μ•„μš”')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
9db0811c41aeb5efa7f3a6c531264e02
Currently, a template-based tokenizer is used as the customized_tagger. To check which templates are included, inspect: twitter._customized_tagger.templates. The following templates are currently set.
twitter._customized_tagger.templates
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
dcaa67d50950617cfd5a4986ae836f7a
κΈ°λ³Έ νƒ¬ν”Œλ¦Ώμ€ customized_konlpy/data/templates/twitter_templates0 에 μ €μž₯λ˜μ–΄ μžˆμŠ΅λ‹ˆλ‹€. text ν˜•μ‹μ˜ 파일이며, λ„μ–΄μ“°κΈ°λ‘œ μ•„λž˜μ™€ 같은 κΈ°λ³Έ ν…œν”Œλ¦Ώμ„ μ§€μ •ν•˜λ©΄ λ©λ‹ˆλ‹€.
cat ../ckonlpy/data/templates/twitter_templates0
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
1a4029d8d357eb3a7689c2d9473e2ec3
If you want to add a template while working, templates can be entered one at a time as a tuple of str. _customized_tagger.add_a_template() checks that the template is not a duplicate and then adds it.
twitter._customized_tagger.add_a_template(('Modifier', 'Noun', 'Noun', 'Noun', 'Josa')) twitter._customized_tagger.templates
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
cdca3caa4c864a97a5c3ec35106cb483
Since ('Noun', 'Noun', 'Josa') has been entered, and the tagger knows that 'νŠΈμ™€μ΄μŠ€' and 'tt' are nouns, the sentence below is now recognized correctly.
twitter.pos('우리트λ‘₯이꺼ttλŠ” μ’‹μ•„μš”')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
68ab44e7bf5f536e3b0738e4ace7e6e1
To prevent tags that do not exist in the Twitter Korean analyzer from being entered when adding dictionary entries, a check on the tag value is implemented. twitter.tagset >>> {'Adjective': 'ν˜•μš©μ‚¬', 'Adverb': '뢀사', 'Alpha': 'μ•ŒνŒŒλ²³', 'Conjunction': '접속사', 'Determiner': 'κ΄€ν˜•μ‚¬', 'Eomi': 'μ–΄λ―Έ', 'Exclamation': '감탄사', 'Foreign': 'μ™Έκ΅­μ–΄, ν•œμž 및 κΈ°νƒ€κΈ°ν˜Έ', 'Hashtag': 'νŠΈμœ„ν„° ν•΄μ‰¬νƒœκ·Έ', 'Josa': '쑰사', 'KoreanParticle': '(ex: γ…‹γ…‹)', 'Modifier': 'κ΄€ν˜•μ‚¬', 'Noun': 'λͺ…사', 'Number': '숫자', 'PreEomi': '선어말어미', 'Punctuation': 'ꡬ두점', 'ScreenName': 'νŠΈμœ„ν„° 아이디', 'Suffix': '접미사', 'Unknown': '미등둝어', 'Verb': '동사'} A ValueError is raised for any part of speech not registered in twitter.tagset.
twitter.add_dictionary('lovit', 'Name')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
c9e59e0acf3d94fe305014220d28824e
However, if you enter a word into the dictionary with Twitter.add_dictionary(words, tag, force=True), you can add it even with an unknown part of speech.
twitter.add_dictionary('lovit', 'Name', force=True) twitter._dictionary._pos2words
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
8291e4a999bf2aa12a124aa26b0f9465
'Name'μ΄λΌλŠ” 클래슀 (더이상 ν’ˆμ‚¬κ°€ μ•„λ‹ˆλ―€λ‘œ)λ₯Ό μ΄μš©ν•˜λŠ” νƒ¬ν”Œλ¦Ώμ„ ν•˜λ‚˜ μž…λ ₯ν•œ λ’€ pos에 μž…λ ₯ν•˜λ©΄ μ–΄μ ˆ 'lovit은' customized_tagger에 μ˜ν•˜μ—¬ μ²˜λ¦¬κ°€ 되고, μ‚¬μš©μž 사전에 μ•Œλ €μ§€μ§€ μ•Šμ€ μ–΄μ ˆμΈ 'μ‘Έλ €'λŠ” 본래의 νŠΈμœ„ν„° 뢄석기에 μ˜ν•˜μ—¬ μ²˜λ¦¬κ°€ λ©λ‹ˆλ‹€.
twitter._customized_tagger.add_a_template(('Name', 'Josa')) print(twitter._customized_tagger.templates) twitter.pos('lovit은 μ΄λ¦„μž…λ‹ˆλ‹€.')
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
2ca26fd2c267408ac32d8a07ca5de3ca
Even with templates, several candidate analyses may be produced. You can design your own function that selects the best among the candidates. This approach of creating a few scoring criteria and assigning a weight to each is the method used by the Twitter analyzer; it is intuitive and tunable, which makes it a very good approach.
score_weights = { 'num_nouns': -0.1, 'num_words': -0.2, 'no_noun': -1 } def my_score(candidate): num_nouns = len([w for w,t in candidate if t == 'Noun']) num_words = len(candidate) no_noun = 1 if num_nouns == 0 else 0 score = (num_nouns * score_weights['num_nouns'] + num_words * score_weights['num_words'] + no_noun * score_weights['no_noun']) return score twitter.set_selector(score_weights, my_score)
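To see what a selector like the one above buys you, here is a minimal pure-Python sketch of how such a scorer picks one analysis among several candidates. The candidate taggings below are made up for illustration; only the score_weights and my_score definitions mirror the cell above:

```python
# Same weights and scorer as defined above.
score_weights = {'num_nouns': -0.1, 'num_words': -0.2, 'no_noun': -1}

def my_score(candidate):
    num_nouns = len([w for w, t in candidate if t == 'Noun'])
    num_words = len(candidate)
    no_noun = 1 if num_nouns == 0 else 0
    return (num_nouns * score_weights['num_nouns']
            + num_words * score_weights['num_words']
            + no_noun * score_weights['no_noun'])

# Two hypothetical analyses of the same phrase.
candidates = [
    [('νŠΈμ™€μ΄μŠ€', 'Noun'), ('tt', 'Noun'), ('λŠ”', 'Josa')],   # score: -0.8
    [('νŠΈμ™€μ΄μŠ€tt', 'Noun'), ('λŠ”', 'Josa')],                # score: -0.5
]
best = max(candidates, key=my_score)  # fewer morphemes -> higher score
```

With these (negative) weights, analyses with fewer words and fewer nouns are preferred, so the selector picks the shorter segmentation.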
customKonlpy/tutorials/usage_of_templatetagger.ipynb
TeamEmily/Emily_server
mit
ea3749008fdecf4510e99e06a2e1c537
1) Explore the dataset Numerical exploration Load the csv file into memory using Pandas Describe each attribute is it discrete? is it continuous? is it a number? is it text? Identify the target Check if any values are missing Load the csv file into memory using Pandas
df = pd.read_csv('titanic-train.csv')
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
004435b95066fb981740098519bdca0a
What's the content of df ?
df.head(3)
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
51ab71c672f8bb6f6c385ca426b41802
Is Pclass a continuous or discrete class?
df['Pclass'].value_counts()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
15a579e74418ebbbd445b72ec20d36b9
What about these: ('SibSp', 'Parch')?
df['SibSp'].value_counts() df['Parch'].value_counts()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
43087b46045c0fb2f274f7ad0bc799be
and what about these: ('Ticket', 'Fare', 'Cabin', 'Embarked')?
df[['Ticket', 'Fare', 'Cabin']].head(3) df['Embarked'].value_counts()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
fedbf4e59ffbd9410d90071a2404b376
Identify the target What are we trying to predict? ah, yes... Survival!
df['Survived'].value_counts()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
2d687309aef8582c3a4eb7b695407cda
Mental notes so far: Dataset contains 891 entries 1 Target column (Survived) 11 Features: 6 numerical, 5 text 1 useless (PassengerId) 3 categorical (Pclass, Sex, Embarked) 4 numerical, > 0 (Age, SibSp, Parch, Fare) 3 not sure how to treat (Name, Ticket, Cabin) Age is only available for 714 passengers Cabin is only available for 204 passengers Embarked is missing for 2 passengers Visual exploration plot the distribution of Age impute the missing values for Age using the median Age check the influence of Age, Sex and Class on Survival Plot the distribution of Age
df['Age'].plot(kind='hist', figsize=(10,6)) plt.title('Distribution of Age', size = '20') plt.xlabel('Age', size = '20') plt.ylabel('Number of passengers', size = '20') median_age = df['Age'].median() plt.axvline(median_age, color = 'r') median_age
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
ab67877778d4929fc6718ccd0e67b544
impute the missing values for Age using the median Age
df['Age'].fillna(median_age, inplace = True) df.info()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
e55b3855775444b41eedec9b119f7417
check the influence of Age
df[df['Survived']==1]['Age'].plot(kind='hist', bins = 10, range = (0,100), figsize=(10,6), alpha = 0.3, color = 'g') df[df['Survived']==0]['Age'].plot(kind='hist', bins = 10, range = (0,100), figsize=(10,6), alpha = 0.3, color = 'r') plt.title('Distribution of Age', size = '20') plt.xlabel('Age', size = '20') plt.ylabel('Number of passengers', size = '20') plt.legend(['Survived', 'Dead']) plt.show()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
0f244b371dd0e180aaddd29db7d059cd
Check the influence of Sex on Survival
survival_by_gender = df[['Sex','Survived']].pivot_table(columns = ['Survived'], index = ['Sex'], aggfunc=len) survival_by_gender survival_by_gender.plot(kind = 'bar', stacked = True) plt.show()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
4a148d4b26b3ae3722b23423d00636ab
Check the influence of Pclass on Survival
survival_by_Pclass = df[['Pclass','Survived']].pivot_table(columns = ['Survived'], index = ['Pclass'], aggfunc=len) survival_by_Pclass survival_by_Pclass.plot(kind = 'bar', stacked = True) plt.show()
Titanic Survival Workshop.ipynb
Dataweekends/odsc_intro_to_data_science
mit
2b0cb8a68216e93c0e3ff21475730da1
Ok, so, Age and Pclass seem to have some influence on survival rate. Let's build a simple model to test that Define a new feature called "Male" that is 1 if Sex = 'male' and 0 otherwise
df['Male'] = df['Sex'].map({'male': 1, 'female': 0})
df[['Sex', 'Male']].head()
Define the simplest model as a benchmark

The simplest model predicts 0 for everybody, i.e. no survival. How good is it?
actual_dead = len(df[df['Survived'] == 0])
total_passengers = len(df)
ratio_of_dead = actual_dead / float(total_passengers)
print("If I predict everybody dies, I'm correct %0.1f %% of the time" % (100 * ratio_of_dead))
df['Survived'].value_counts()
We need to do better than that Define features (X) and target (y) variables
X = df[['Male', 'Pclass', 'Age']]
y = df['Survived']
Initialize a decision tree model
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(random_state=0)
model
Split the features and the target into Train and Test subsets. The ratio should be 80/20.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
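One step is missing before the model can make predictions: the tree has to be fitted on the training data. A self-contained sketch with a hypothetical toy dataset (in the notebook itself, the `X_train` and `y_train` produced by the split above would be used instead):

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy features [Male, Pclass, Age] and survival labels
X = [[1, 3, 22.0], [0, 1, 38.0], [0, 3, 26.0], [0, 1, 35.0],
     [1, 3, 35.0], [1, 2, 27.0], [0, 2, 14.0], [1, 1, 54.0]]
y = [0, 1, 1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)  # must happen before any predict() call
print(model.score(X_test, y_test))
```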
Print the confusion matrix for the decision tree model
from sklearn.metrics import confusion_matrix

y_pred = model.predict(X_test)
print("\n======= confusion matrix ==========")
print(confusion_matrix(y_test, y_pred))
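The raw counts in the confusion matrix can be turned into an accuracy to compare against the 61.6% "everybody dies" baseline. A sketch with hypothetical counts (the real numbers come from the matrix printed above):

```python
import numpy as np

# confusion_matrix lays counts out as [[TN, FP], [FN, TP]]
cm = np.array([[98, 12],
               [24, 45]])  # hypothetical counts for illustration

tn, fp, fn, tp = cm.ravel()
accuracy = (tn + tp) / cm.sum()
print("accuracy: %.3f" % accuracy)  # prints "accuracy: 0.799"
```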
Numpy: Arrays and efficient computations

The heart of Numpy is the array. This data type represents a matrix and is implemented in C under the hood, with great emphasis placed on efficient memory usage. So the common claim that "Python is far too slow" is not necessarily true. We can create arrays in several ways.
xs = np.array([1, 2, 3, 4])  # Converts a Python list into a Numpy array
print(xs)

ys = np.arange(4)  # Creates an array analogous to the `range` function
print(ys)
tutorials/Wissenschaftliches Python Tutorial.ipynb
kdungs/teaching-SMD2-2016
mit
8dc220ac508d3236ac7571747dc14e88
Numpy arrays support arithmetic operations, which are again implemented efficiently. For example, two arrays can be added (element-wise) as long as they have the same dimensions.
xs + ys
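Element-wise addition with matching shapes is only the simplest case. NumPy also broadcasts scalars and compatible shapes, which is worth knowing early on:

```python
import numpy as np

xs = np.array([1, 2, 3, 4])

# A scalar is broadcast across the whole array
print(xs + 10)   # [11 12 13 14]
print(xs * 2)    # [2 4 6 8]

# A (4,) array and a (3, 1) array broadcast to a (3, 4) grid
grid = xs + np.arange(3).reshape(3, 1)
print(grid.shape)  # (3, 4)
```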
To get an overview of all of Numpy's features, we can consult the help. In addition to the help function, IPython also offers the ? magic with better Jupyter integration.
np?
For the exercises we will frequently need random numbers. np.random is the tool of choice for this.
np.random?

n_events = 10000
gauss = np.random.normal(2, 3, size=n_events)  # Draw 10000 Gaussian-distributed random
                                               # numbers with mu=2 and sigma=3.
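A quick sanity check: with 10000 samples, the empirical mean and standard deviation should land close to the chosen parameters:

```python
import numpy as np

gauss = np.random.normal(2, 3, size=10000)

print(gauss.mean())  # close to mu = 2
print(gauss.std())   # close to sigma = 3
```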
Matplotlib: Pretty plots

Matplotlib offers very intuitive functions for visualizing data. The very thorough documentation gives a good overview. Here we only use the pyplot submodule, which provides a simple interface to the core functionality. The Matplotlib gallery contains many nice examples with code snippets.

To histogram our Gaussian random numbers we simply use plt.hist. We also set axis labels right away.
plt.hist(gauss)
plt.xlabel('Value')
plt.ylabel('Absolute frequency')
If this plot looks too sterile to you, we can use the style of the well-known R library ggplot2.
plt.style.use('ggplot')
We now want to increase the number of bins and also normalize the histogram, so that we can overlay the normalized probability density function (PDF).
plt.hist(gauss, bins=20, density=True)  # density=True replaces the deprecated normed=True
plt.xlabel('Value')
plt.ylabel('Relative frequency')
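To actually overlay the analytic PDF on the normalized histogram, scipy.stats.norm can be used. A sketch, assuming SciPy is available (it regenerates `gauss` with mu=2 and sigma=3 to stay self-contained; in the notebook the existing `gauss` would be reused):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

mu, sigma = 2, 3
gauss = np.random.normal(mu, sigma, size=10000)

plt.hist(gauss, bins=20, density=True, label='Sample')

# Evaluate the analytic Gaussian density on a fine grid over the sample range
x = np.linspace(gauss.min(), gauss.max(), 200)
pdf = norm.pdf(x, loc=mu, scale=sigma)
plt.plot(x, pdf, label='PDF')

plt.xlabel('Value')
plt.ylabel('Relative frequency')
plt.legend()
```

With density=True, the histogram integrates to 1, so the analytic curve and the bars live on the same scale.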