text stringlengths 256 65.5k |
|---|
View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook
前のチュートリアルでは、機械学習の基本構成ブロックの1つである自動微分について TensorFlow の API を学習しました。 このチュートリアルでは、これまでのチュートリアルに出てきた TensorFlow の基本要素を使って、単純な機械学習を実行します。
TensorFlow には tf.keras が含まれています。tf.kerasは、抽象化により決まり切った記述を削減し、柔軟さと性能を犠牲にすることなく TensorFlow をやさしく使えるようにする、高度なニューラルネットワーク API です。開発には tf.Keras API を使うことを強くおすすめします。しかしながら、この短いチュートリアルでは、しっかりした基礎を身につけていただくために、ニューラルネットワークの訓練についていちから学ぶことにします。
設定
import tensorflow as tf
変数
TensorFlow のテンソルはイミュータブルでステートレスなオブジェクトです。しかしながら、機械学習モデルには変化する状態が必要です。モデルの訓練が進むにつれて、推論を行うおなじコードが異なる振る舞いをする必要があります(望むべくはより損失の少なくなるように)。この計算が進むにつれて変化する必要がある状態を表現するために、Python が状態を保つプログラミング言語であることを利用することができます。
# Python の状態を使う
x = tf.zeros([10, 10])
x += 2 # これは x = x + 2 と等価で, x の元の値を変えているわけではない
print(x)
tf.Tensor( [[2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] [2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]], shape=(10, 10), dtype=float32)
TensorFlow にはステートフルな演算が組み込まれているので、状態を表現するのに低レベルの Python による表現を使うよりは簡単なことがしばしばあります。
tf.Variableオブジェクトは値を保持し、何も指示しなくともこの保存された値を読み出します。TensorFlow の変数に保持された値を操作する演算(tf.assign_sub, tf.scatter_update, など)が用意されています。
v = tf.Variable(1.0)
# Python の `assert` を条件をテストするデバッグ文として使用
assert v.numpy() == 1.0
# `v` に値を再代入
v.assign(3.0)
assert v.numpy() == 3.0
# `v` に TensorFlow の `tf.square()` 演算を適用し再代入
v.assign(tf.square(v))
assert v.numpy() == 9.0
tf.Variableを使った計算は、勾配計算の際に自動的にトレースされます。埋め込みを表す変数では、TensorFlow は既定でスパースな更新を行います。これは計算量やメモリ使用量においてより効率的です。
tf.Variableはあなたのコードを読む人にその状態の一部がミュータブルであることを示す方法でもあります。
線形モデルの適合
これまでに学んだ Tensor、 Variable、 そして GradientTapeという概念を使って、簡単なモデルの構築と訓練を行ってみましょう。通常、これには次のようないくつかの手順が含まれます。
モデルの定義
損失関数の定義
訓練データの取得
訓練データを使って実行し、"optimizer" を使って変数をデータに適合
ここでは、f(x) = x * W + bという簡単な線形モデルを作ります。このモデルには W (重み) と b (バイアス) の2つの変数があります。十分訓練されたモデルが W = 3.0 と b = 2.0 になるようなデータを人工的に作ります。
モデルの定義
変数と計算をカプセル化する単純なクラスを定義してみましょう。
class Model(object):
def __init__(self):
# 重みを `5.0` に、バイアスを `0.0` に初期化
# 実際には、これらの値は乱数で初期化するべき(例えば `tf.random.normal` を使って)
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
損失関数の定義
損失関数は、ある入力値に対するモデルの出力がどれだけ出力の目的値に近いかを測るものです。訓練を通じて、この差異を最小化するのがゴールとなります。最小二乗誤差とも呼ばれる L2 損失を使ってみましょう。
def loss(predicted_y, target_y):
return tf.reduce_mean(tf.square(predicted_y - target_y))
訓練データの取得
最初に、入力にランダムなガウス(正規)分布のノイズを加えることで、訓練用データを生成します。
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random.normal(shape=[NUM_EXAMPLES])
noise = tf.random.normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
モデルを訓練する前に、モデルの予測値を赤で、訓練データを青でプロットすることで、損失を可視化します。
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: %1.6f' % loss(model(inputs), outputs).numpy())
Current loss: 8.123021
訓練ループの定義
ネットワークと訓練データが準備できたところで、損失が少なくなるように、重み変数 (W) とバイアス変数 (b) を更新するために、gradient descent (勾配降下法) を使ってモデルを訓練します。勾配降下法にはさまざまな変種があり、我々の推奨する実装である tf.train.Optimizer にも含まれています。しかし、ここでは基本原理から構築するという精神で、自動微分を行う tf.GradientTape と、値を減少させる tf.assign_sub (これは、tf.assign と tf.sub の組み合わせですが)の力を借りて、この基本計算を実装してみましょう。
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
最後に、訓練データ全体に対して繰り返し実行し、W と b がどのように変化するかを見てみましょう。
model = Model()
# 後ほどプロットするために、W 値と b 値の履歴を集める
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# すべてをプロット
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'True W', 'True b'])
plt.show()
Epoch 0: W=5.00 b=0.00, loss=8.12302 Epoch 1: W=4.65 b=0.38, loss=5.69242 Epoch 2: W=4.36 b=0.68, loss=4.09065 Epoch 3: W=4.12 b=0.93, loss=3.03475 Epoch 4: W=3.92 b=1.12, loss=2.33847 Epoch 5: W=3.76 b=1.28, loss=1.87918 Epoch 6: W=3.63 b=1.41, loss=1.57614 Epoch 7: W=3.52 b=1.51, loss=1.37612 Epoch 8: W=3.44 b=1.60, loss=1.24407 Epoch 9: W=3.36 b=1.66, loss=1.15686
次のステップ
このチュートリアルでは tf.Variable を使って単純な線形モデルの構築と訓練を行いました。 |
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
allfiles from pythonista-site-packages-3 repository to your localsite-packages-3folder. Just download them or usegitvia StaSH.
Then you can hit ...
Cmd /- to toggle comments
Cmd N- to open new tab and to show new file dialog
Cmd Shift N- to open just new tab
Cmd 0(zero) - to toggle library view (I call it navigator)
Cmd W- to close current tab
Cmd Shift W- to close all tabs except current one
Cmd O(letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNINGIt works, but it'sexperimental,dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lost data, ... I warned you :) If you will modify this code and Pythonista will crash during startup, just openpythonista3://in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editormodule functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clueyetwhat should I pass as editor type, when force reload should be used, ... Also some directories are excluded, so, you know why if some files are missing in open quickly dialog. Also the open quickly dialog has hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
Done :) More updates later, going to take week off :)
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
cool,perfect,thanks.
and wish @omz next pythonista update can support it native.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item. Eg. Maybe the user could pass the name of the wrench item name as a param in the bm.start('StaSh'). I guess it could also be a .py filename. Although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated, because I think it would get infinitely more complicated to let users to map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .pyand run it, basically emulation of open and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work, which can be replaced with something more simpler. I can remove HW shortcuts registration from
bm.start, can provide something likebm.register_default_key_commands. And if you don't call this function in yourpythonista_startup.pyfile, feel free to map your own shortcuts viabm.register_key_command. Or call it and add your own afterbm.start().Shortcut for the wrench item
Just do this in your
pythonista_startup.pyfile:
#!python3
import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui
def launch_wrench_item(name):
print('Wrench item: {}'.format(name))
def launch_stash():
launch_wrench_item('StaSh')
bm.start()
bkc.register_key_command(
bkc.PYTHONISTA_SCOPE_EDITOR,
'S',
bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
launch_stash,
'Launch StaSh')
This maps
Cmd-Shift-Stolaunch_stash, where you can do whatever you want :)
Run Quickly (search for just
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.pymoved toblackmamba/experimental
blackmamba/startup.pytrashed
register_default_key_commandsintroduced inblackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3->blackmamba)
If you don't want to check all these changes, just update your
pythonista_startup.pycontent:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help to get it setup correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing, i just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oppps, sorry, I missed the post above this...Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just silly example how to print smth with keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am staring to take the time to do things properly. Just a matter of time it will become second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
Butit does work only and only if scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_pickerrenamed toaction_picker
Wrench Quickly...renamed toAction Quickly...with new shortcutCmd-Shift-A
ide.run_actionadded (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import * # UIKeyModifier*
import blackmamba.ide as ide
bm.register_default_key_commands()
def launch_stash():
ide.run_action('StaSh') # <- editor action custom title, case sensitive
# or ide.run_script('launch_stash.py')
register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
launch_stash, 'Launch StaSh...')
ide.run_actionaccepts editor action custom title and it's case sensitive. Another option is toignoreeditor actions and use justide.run_scriptwith script name.zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
Hmm, StaSh & pip & GitHub doesn't support update. Hmm.
Okay, managed to create PyPI package. So, it's installable via:
cd ~/Documents
pip install blackmamba -d site-packages-3
But there's an issue with XML RPC and PyPI, see issue #264. So far, the workaround is to change line number 899 in the
site-packages/stash/bin/pip.pyfile from ...
hits = self.pypi.package_releases(pkg_name, True) # True to show all versions
... to ...
hits = self.pypi.package_releases(pkg_name, False)
This fixes the pip issue. Or at least, it looks like it does.
I give up smoking last night and changed to vamping instead. Maybe this was not a good week to do that :)
For those who are using
git, feel free to pull:
Basically did add more complex sample pythonista_startup.py (readme) and ability to set which folders are ignored in
Run/Open Quickly...dialogs. Now going to figure out how to publish PyPI package on iPad, left MBP at home for two days :)
0.0.11 released (git & pip):
two shortcuts modified
Ctrl-Shift-Badded for clear annotations & pyflakes (Analyze)
P.S. Did want to use
Cmd-Shift-B(Xcode sync), but it's already used in Pythonista for toggle breakpoint.WARNINGI did release the package via PyPI as well, but StaSh pip doesn't see it. Thinking what should I do with this :)
@zrzka , i am still using git pull. Working great. Ctrl-shift-b working perfectly!
I am going to add an issue to the Pythonista github issues about the hud display delay for 'check style' and 'Analyse'. I feel its twice the time it should be. Depends what @omz thinks. It does not deserve its own param setting in my view (more important things). Can live with what it is now, but seems to me that the hud should be shown for half the time.
@Phuket2 This HUD delay you're talking about, do you mean HUD delay when you do use
Ctrl Shift Bor when you do use Wrench - Analyze / Check Style action item? If you're talking about the first case (viaCtrl Shift B), then the delay is 1.0s. Check analyzer.py, lines 72 & 99. I do not emulate tapping on these items, nor using these action items from my script. I have customanalyzer.pymodule which usespyflakesdirectly. So, it's a completely independent implementation. |
One more doubt has arised, as I get closer to the end of this course. Doing the less complicated version of Similarities was trivial to me; however, I am going through some issues with the outcomes of my helpers.py for the more comfortable version of this problem.
My implementation is as follows:
helpers.py
from enum import Enum
class Operation(Enum):
"""Operations"""
DELETED = 1
INSERTED = 2
SUBSTITUTED = 3
def __str__(self):
return str(self.name.lower())
def distances(a, b):
"""Calculate edit distance from a to b"""
# DOING
# create table of distances, based on the lengths of the strings provided
editDistances = [[tuple((0,None)) for j in range(len(b) + 1)] for i in range(len(a) + 1)]
emptyTuple = tuple((0,None))
# stipulate first values based on conversions from empty strings to a and b
for column in range(1, len(editDistances[0])):
editDistances[0][column] = (column, Operation.INSERTED)
for row in range(1, len(editDistances)):
editDistances[row][0] = (row, Operation.INSERTED)
# iterate through each element, search for empty tuples and replace them with previous answers
for row in range(1, len(editDistances)):
for column in range(1, len(editDistances[0])):
# create set of options, based on the three possible actions and conditioned to the substitution alternatives
if a[row - 1] == b[column - 1]:
options = [(editDistances[row-1][column][0] + 1, Operation.DELETED), (editDistances[row][column - 1][0] + 1, Operation.INSERTED), (editDistances[row - 1][column - 1][0], Operation.SUBSTITUTED)]
else:
options = [(editDistances[row-1][column][0] + 1, Operation.DELETED), (editDistances[row][column - 1][0] + 1, Operation.INSERTED), (editDistances[row - 1][column - 1][0] + 1, Operation.SUBSTITUTED)]
# find the optimal solution
optimalOption = options[0]
if options[1][0] < optimalOption[0]:
optimalOption = options[1]
elif options[2][0] < optimalOption[0]:
optimalOption = options[2]
# insert the optimal solution to the distances table
editDistances[row][column] = optimalOption
# return finished table
return editDistances
check50 returns the following result when checking my implementation:
check50
~/workspace/pset6/similarities/more/ $ check50 cs50/2018/x/similarities/more
Connecting.....
Authenticating......
Preparing............
Uploading...........
Checking.......
:) helpers.py exists
:) helpers.py compiles
:) takes 0 operation to convert "" to ""
:) takes 3 operation to convert "dog" to ""
:) takes 4 operation to convert "" to "dog"
:) takes 1 operation to convert "a" to "b"
:( takes 1 operation to convert "cat" to "coat"
Expected edit distance of 1, not 3
:) takes 1 operation to convert "frog" to "fog"
:) takes 1 operation to convert "year" to "pear"
:) takes 0 operations to convert "today" to "today"
:( takes 5 operations to convert "today" to "yesterday"
Expected edit distance of 5, not 9
:) takes 6 operations to convert "tomorrow" to "today"
:) takes 3 operations to convert "today" to "ToDaY"
See https://cs50.me/checks/7a76bad31ed34bfc6358951f8313ad567096e5a1 for more detail.
~/workspace/pset6/similarities/more/ $
It is my understanding that my code is ignoring the insertion possibilities and going straight away with the replacement. However, I am unable to see which changes could be made in order to mitigate this issue. Can someone share some thoughts? It would be much appreciated.
--
I'd like to seize this opportunity and externalize how fruitful this course has been to me. Even throughout the Python and HTML parts, which elaborate on languages that I already know and have experience on, I have been constantly challenged. This is definitely how every Computer Science bachelor degree should begin!
--
Thank you in advance for the assistance provided.
~imatheussm |
Black Mamba - open quickly and other shortcuts
Hi all,
I mainly work with Pythonista on the biggest iPad with external keyboard. And I miss some features like Open quickly, registering shortcuts for my scripts, ... so I decided to add them by myself. Hope it will be natively supported in the future.
Here's screen recording of what I can do with external keyboard only. If you would like to try it, feel free to clone
allfiles from pythonista-site-packages-3 repository to your localsite-packages-3folder. Just download them or usegitvia StaSH.
Then you can hit ...
Cmd /- to toggle comments
Cmd N- to open new tab and to show new file dialog
Cmd Shift N- to open just new tab
Cmd 0(zero) - to toggle library view (I call it navigator)
Cmd W- to close current tab
Cmd Shift W- to close all tabs except current one
Cmd O(letter o) - to quickly open files
If you need more shortcuts for more actions, just let me know and I'll try to add them.
WARNINGIt works, but it'sexperimental,dangerous. There's some swizzling, some direct calls to ObjC instances, passing guessed parameter values, etc. It can crash your Pythonista, you can lost data, ... I warned you :) If you will modify this code and Pythonista will crash during startup, just openpythonista3://in your browser and fix your changes.
Will write more issues, so, Pythonista modules can be enhanced (like
editormodule functions to close tab, open file, etc.). Then I can remove some of these crazy calls.
Back to open quickly ... It just supports Python (
.py) and Markdown (.md) files. That's because I have no clueyetwhat should I pass as editor type, when force reload should be used, ... Also some directories are excluded, so, you know why if some files are missing in open quickly dialog. Also the open quickly dialog has hidden title bar -> no close button -> you need to hit Esc (or Ctrl [ on smart keyboard) to close it.
Anyway, it's pretty scary, fascinating, ... that we can do all these things directly on iPad. Many thanks to Ole for this wonderful tool. This tool let me play with Python on iPad and I can drive our AWS services directly from iPad as well. I'm not forced to bring MBP everywhere, iPad is enough :) Thanks again.
Enjoy!
zrzka
Phuket2
@zrzka , ok. Enjoy. Thanks again.
wolf71
cool,perfect,thanks.
and wish @omz next pythonista update can support it native.
@zrzka, hey. I know you are off right now, hope you are having a good break. But when you return could you consider having a (dedicated) key combo (cmd-something) that you can apply to invoke a given wrench menu item. Eg. Maybe the user could pass the name of the wrench item name as a param in the bm.start('StaSh'). I guess it could also be a .py filename. Although that seems a bit more on the wild side. Maybe you have some better ideas than that.
I also mention dedicated, because I think it would get infinitely more complicated to let users to map their own keys.
Anyway thanks again, still only a day or so with what you have done so far and its very useful for me with the Apple iPad Pro keyboard.
@Phuket2 thanks for the suggestions.
Open Quickly
I would like to reuse Open Quickly dialog (when I refactor it) for:
Run Quickly (search for just .pyand run it, basically emulation of open and tapping on play),
Wrench Quickly, same idea, but you can search for wrench items.
UI for mapping keys
I'm not going to provide UI for mapping keys, because it's a lot of work, which can be replaced with something more simpler. I can remove HW shortcuts registration from
bm.start, can provide something likebm.register_default_key_commands. And if you don't call this function in yourpythonista_startup.pyfile, feel free to map your own shortcuts viabm.register_key_command. Or call it and add your own afterbm.start().Shortcut for the wrench item
Just do this in your
pythonista_startup.pyfile:
#!python3
import blackmamba.startup as bm
import blackmamba.key_commands as bkc
import blackmamba.uikit as bui
def launch_wrench_item(name):
print('Wrench item: {}'.format(name))
def launch_stash():
launch_wrench_item('StaSh')
bm.start()
bkc.register_key_command(
bkc.PYTHONISTA_SCOPE_EDITOR,
'S',
bui.UIKeyModifierCommand | bui.UIKeyModifierShift,
launch_stash,
'Launch StaSh')
This maps
Cmd-Shift-Stolaunch_stash, where you can do whatever you want :)
Run Quickly (search for just
Some breaking changes pushed. Check Usage section in the readme. Summary:
external_screen.pymoved toblackmamba/experimental
blackmamba/startup.pytrashed
register_default_key_commandsintroduced inblackmamba/__init__.py
removed scope in blackmamba/key_commands.py
usage examples updated
repository renamed (pythonista-site-packages-3->blackmamba)
If you don't want to check all these changes, just update your
pythonista_startup.pycontent:
#!python3
import blackmamba as bm
bm.register_default_key_commands()
@zrzka , hey thanks. The git pull worked perfectly. Thanks for your help to get it setup correctly. Makes a huge difference being able to do that. I haven't added my own keys yet, but will put some thought into it. I am always running 'Check Style' and 'Reformat Code' these days so I am guessing, i just need to find these scripts and run them from function stubs like you do with the hallo example. Anyway, will give it a go later.
Thanks again. This is really fantastic with an external keyboard. I am sure a lot of other apps would be envious of this ability.
Oppps, sorry, I missed the post above this...Looks like the wrench keys have been handled. That's great. I will try them now!!!!
Phuket2
@Phuket2 wrench item(s) are not handled yet. It's just silly example how to print smth with keyboard shortcut. I'll try to add run script / wrench item today. Then you'll be able to use it.
@zrzka , ok. Cool. I had a lame attempt to get it working and started going down a rabbit hole. But for me it will be a big help. Esp the check styles/reformat code. @ccc has beaten me with a big stick so I don't dare to push anything anymore until I have done all the checks :) I find it super annoying and time consuming. But I am happy I am staring to take the time to do things properly. Just a matter of time it will become second nature.
Ok, will keep a look out for the next update :)
@Phuket2 I did refactor my picker, thus I was able to add Run Quickly... (Cmd Shift R) and Wrench Quickly... (Cmd Option R).
Butit does work only and only if scripts are Python 3 compatible. Otherwise you can run them, but they will fail to execute. See another thread for more info. Sorry for this, will try to solve it somehow.
It's useless for StaSh (Python 2) and maybe for many more scripts.
Another update ...
wrench_pickerrenamed toaction_picker
Wrench Quickly...renamed toAction Quickly...with new shortcutCmd-Shift-A
ide.run_actionadded (see example below)
slight Action Quickly... UI improvements
title is custom title or just script name without extension if title is not provided
subtitle is script path
... and here's an example how to register custom shortcut to launch StaSh for example ...
#!python3
import blackmamba as bm
from blackmamba.key_commands import register_key_command
from blackmamba.uikit import * # UIKeyModifier*
import blackmamba.ide as ide
bm.register_default_key_commands()
def launch_stash():
ide.run_action('StaSh') # <- editor action custom title, case sensitive
# or ide.run_script('launch_stash.py')
register_key_command('S', UIKeyModifierCommand | UIKeyModifierShift,
launch_stash, 'Launch StaSh...')
ide.run_actionaccepts editor action custom title and it's case sensitive. Another option is toignoreeditor actions and use justide.run_scriptwith script name.zrzka
Another installation method added (StaSh & Pip). Check readme. This is preferred way to install Black Mamba. The old git way still works and will work.
Hmm, StaSh & pip & GitHub doesn't support update. Hmm.
Okay, managed to create PyPI package. So, it's installable via:
cd ~/Documents
pip install blackmamba -d site-packages-3
But there's an issue with XML RPC and PyPI, see issue #264. So far, the workaround is to change line number 899 in the
site-packages/stash/bin/pip.pyfile from ...
hits = self.pypi.package_releases(pkg_name, True) # True to show all versions
... to ...
hits = self.pypi.package_releases(pkg_name, False)
This fixes the pip issue. Or at least, it looks like it does.
I give up smoking last night and changed to vamping instead. Maybe this was not a good week to do that :)
For those who are using
git, feel free to pull:
Basically did add more complex sample pythonista_startup.py (readme) and ability to set which folders are ignored in
Run/Open Quickly...dialogs. Now going to figure out how to publish PyPI package on iPad, left MBP at home for two days :)
0.0.11 released (git & pip):
two shortcuts modified
Ctrl-Shift-Badded for clear annotations & pyflakes (Analyze)
P.S. Did want to use
Cmd-Shift-B(Xcode sync), but it's already used in Pythonista for toggle breakpoint.WARNINGI did release the package via PyPI as well, but StaSh pip doesn't see it. Thinking what should I do with this :)
@zrzka , i am still using git pull. Working great. Ctrl-shift-b working perfectly!
I am going to add an issue to the Pythonista github issues about the hud display delay for 'check style' and 'Analyse'. I feel its twice the time it should be. Depends what @omz thinks. It does not deserve its own param setting in my view (more important things). Can live with what it is now, but seems to me that the hud should be shown for half the time.
@Phuket2 This HUD delay you're talking about, do you mean HUD delay when you do use
Ctrl Shift Bor when you do use Wrench - Analyze / Check Style action item? If you're talking about the first case (viaCtrl Shift B), then the delay is 1.0s. Check analyzer.py, lines 72 & 99. I do not emulate tapping on these items, nor using these action items from my script. I have customanalyzer.pymodule which usespyflakesdirectly. So, it's a completely independent implementation.
@zrzka , yes i agree. It's different. I was talking about when you select from the wrench menu. But i did think it affected your code also, but I can see now it does not. I guess you felt the same, the time was to long. |
A vida de um desenvolvedor back-end envolve a criação de alguns scripts para serem executados no ambiente de produção. Por exemplo, você pode precisar atualizar muitos registros de uma vez, acionar eventos ou corrigir um bug específico. O que eles têm em comum? Se não forem projetados corretamente, podem causar um efeito colateral muito ruim na aplicação, arruinar a experiência do usuário, etc.
No cenário do mundo real, algumas mudanças são ~praticamente~ impossíveis de reverter:
Acionar e-mails / notificações / mensagens / SMSs para clientes
Fornecer, acidentalmente, um desconto a um usuário (ou muitos)
Atualizar de registros sem backup
Como desenvolvedores, precisamos não apenas verificar novamente nossos scripts, mas também fazer o nosso melhor para minimizar / evitar possíveis efeitos colaterais (especialmente se algo inesperado acontecer). Algumas semanas atrás, durante uma sessão pair programming, [Elias] (https://etandel.xyz) e eu criamos um crítico comando Django responsável por alterar registros em muitas tabelas de acordo com uma lógica de negócios. À medida que mergulhávamos nas partes intrínsecas do script, percebemos o quão perigoso isso poderia ser e tomamos alguns cuidados que são compartilhados neste post.
Barras de progresso são fantásticas!
Executar um script que leva muito tempo para ser concluído é desesperador. Você fica confuso porque não sabe o que está acontecendo: (i) ainda está funcionando? (ii) a conexão está desligada? É por isso que um senso de progresso é importante. Se você é um programador Python, alguns projetos como tqdm e clint podem ajudá-lo fornecendo maneiras de criar barras de progresso. De qualquer forma, se isso exigir muito esforço ou se sua linguagem de programação não ajudar você com isso, um simples indicador <accomplished> / <total> é um bom começo, pelo menos.
Logging
Depois de executar um script, as coisas acontecem:
Você não tem certeza do que foi feito; Alguns dias depois, como você se lembra?
Você não terá 100% de certeza sobre os registros que foram atualizados;
Qualquer reversão exigirá um backup específico;
Além disso, pense no cenário em que seu script tem um bug inesperado ou os registros que você atualiza não estão consistentemente alinhados à lógica de negócios. Quão fácil é reverter os efeitos colaterais?
Toda dor acima mencionada pode ser atenuada se você simplesmente registrar as alterações. Você pode criar um arquivo simples que armazena: (i) a id dos registros que você atualizou; (ii) o valor da coluna anterior; (iii) o novo valor. Dessa forma, se algo inesperado acontecer, você pode facilmente analisar o log, obter os registros que alterou e definir os valores antigos de volta sem ter que carregar um backup.
Auto-verificação
E se o seu script pudesse verificar inconsistências durante a execução? Pouco antes da conclusão, ele pode analisar o log e verificar se os novos registros são consistentes com a lógica de negócios.
Vamos supor, por exemplo, que você precise multiplicar o saldo de vários usuários por um fator. Como você é cauteloso, seu script produz o seguinte registro:
id; antigo; novo 10987; 10; 100 98011; 5; 50 87652; 3; 35
O último registro não está correto porque o novo saldo ultrapassa em 5 o valor esperado (30). Nesse caso, uma exceção pode ser lançada para reverter todas as alterações.
Rollback
Especialmente ao lidar com registros de banco de dados, você deve garantir que uma política de tudo-ou-nada seja seguida: ou todas as alterações são persistidas ou nada é feito. Use uma transação de banco de dados para fazer isso, pois no caso de qualquer erro o rollback será executado.
@transaction.atomic def handle(self, *args, **kwargs): <your_code_goes_here>
Dry-run
Sempre que possível, forneça uma opção dry-run. Dessa forma as alterações não são confirmadas e é possível verificar se há erros no tempo de execução. Se você usa o framework Django, por exemplo, seu comando pode reverter todas as mudanças se dry-run for passado como argumento:
@transaction.atomic def handle(self, *args, **kwargs): dry_run = kwargs['dry_run'] if dry_run: transaction.set_rollback(True)
Tmux
E se sua conexão for perdida durante a execução? Isso pode ser muito ruim, hein? É por isso que é recomendado usar um multiplexador de Terminal como o tmux (dá uma olhada neste tutorial). É realmente útil porque você pode iniciar tarefas de longa execução em seu servidor remoto e mantê-las em execução mesmo que sua conexão seja perdida.
Revisão de código
Cada código que vai para a produção deve ser revisado por outro programador. Os scripts não são exceção. Ponto final. |
XLA (Hızlandırılmış Doğrusal Cebir), potansiyel olarak kaynak kodu değişikliği olmadan TensorFlow modellerini hızlandırabilen, doğrusal cebir için alana özgü bir derleyicidir.
Sonuçlar hız ve bellek kullanımındaki iyileştirmelerdir: örn. XLA kullanılarak 8 Volta V100 GPU kullanan BERT MLPerf gönderiminde ~ 7 kat performans artışı ve ~ 5 kat toplu iş boyutu iyileştirmesi elde edildi:
Giriş
Bir TensorFlow programı çalıştırıldığında, tüm işlemler TensorFlow yürütücüsü tarafından ayrı ayrı yürütülür. Her TensorFlow işlemi, yürütücünün gönderdiği önceden derlenmiş bir GPU çekirdeği uygulamasına sahiptir.
XLA, çalışan modeller için alternatif bir mod sağlar: TensorFlow grafiğini, belirli model için özel olarak oluşturulmuş bir dizi hesaplama çekirdeğinde derler. Bu çekirdekler modele özel olduğundan, optimizasyon için modele özgü bilgilerden yararlanabilirler. Örneğin, XLA'nın basit bir TensorFlow hesaplaması bağlamında yaptığı bir optimizasyona bakalım:
def model_fn(x, y, z):
return tf.reduce_sum(x + y * z)
XLA olmadan çalıştırın, grafik üç çekirdek başlatır: biri çarpma, biri toplama ve diğeri azaltma için. Ancak XLA, sonucu tek bir çekirdek başlatmasıyla hesaplayacak şekilde grafiği optimize edebilir. Bunu, toplama, çarpma ve indirgeme işlemlerini tek bir GPU çekirdeğinde "birleştirerek" yapar. Ayrıca, bu birleştirilmiş işlem, y*z ve x+y*z tarafından üretilen ara değerleri belleğe yazmaz; bunun yerine, bu ara hesaplamaların sonuçlarını doğrudan kullanıcılarına "aktarırken" bunları tamamen GPU kayıtlarında tutuyor. Fusion, XLA'nın en önemli optimizasyonudur. Bellek bant genişliği tipik olarak donanım hızlandırıcılarda en az bulunan kaynaktır, bu nedenle bellek işlemlerini kaldırmak, performansı artırmanın en iyi yollarından biridir.
TensorFlow modelleri için XLA'yı etkinleştirin
tf.function(jit_compile=True) ile açık derleme tf.function(jit_compile=True)
Açık derleme API'si, hangi işlevlerin derlenmesi gerektiğini seçmek için ayrıntılı bir denetim sunar. Örneğin, MNIST eğitimini gerçekleştiren aşağıdaki TensorFlow işlevi XLA ile derlenmiştir:
@tf.function(jit_compile=True)
def train_mnist(images, labels):
images, labels = cast(images, labels)
with tf.GradientTape() as tape:
predicted_labels = layer(images)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=predicted_labels, labels=labels
))
layer_variables = layer.trainable_variables
grads = tape.gradient(loss, layer_variables)
optimizer.apply_gradients(zip(grads, layer_variables))
jit_compile API'sinin derlenmesi gereken semantiği vardır: ya işlevin tamamı XLA ile derlenir ya da bir errors.InvalidArgumentError istisnası atılır. XLA şu anda boyutların çıkarılamaz olduğu işlevleri derleyemez : yani, tüm hesaplamayı çalıştırmadan tüm tensörlerin boyutlarını çıkarmak mümkün değilse. Örneğin, aşağıdaki işlev derlenmeyecektir:
@tf.function
def not_compilable(x):
return tf.unique(x)
Şekiller, çalışmalar arasında değişebilir:
@tf.function(jit_compile=True)
def recompiled_on_launch(a, b):
return a + b
recompiled_on_launch(tf.ones([1, 10]), tf.ones([1, 10]))
recompiled_on_launch(tf.ones([1, 100]), tf.ones([1, 100]))
Daha ayrıntılı bir kullanım örneği için eğitici makaleye bakın.
Otomatik kümeleme
XLA'yı TensorFlow modellerinde herhangi bir değişiklik olmadan kullanmaya başlamanın basit bir yolu, XLA kullanılarak derlenebilen ve yürütülebilen TensorFlow işlevleri içindeki kümeleri (bağlı alt grafikler) otomatik olarak bulan otomatik kümelemeyi etkinleştirmektir. GPU'da otomatik kümeleme, TF_XLA_FLAGS ortam değişkeni ayarlanarak etkinleştirilebilir:
$ TF_XLA_FLAGS=--tf_xla_auto_jit=2 path/to/your/tf/program
Otomatik kümeleme şu anda GPU iş yükleri için optimize edilmiştir, ancak ek olarak --tf_xla_cpu_global_jit bayrağı kullanılarak CPU'da da etkinleştirilebilir:
$ TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit" path/to/your/program
Ayrıntılı bir kullanım örneği için otomatik kümeleme öğretici makalesine bakın .
tfcompile ile CPU için AOT (Ahead-of-time) derlemesi
Ayrıca, TensorFlow grafiğini çalıştırılabilir koda dönüştüren bağımsız bir tfcompile aracı da kullanabilirsiniz (yalnızca x86-64 CPU için).
Derlenmiş programları inceleyin
XLA, oluşturulan programları incelemenizi sağlayan iç gözlem olanakları sağlar. Oluşturulan programları dökmek için, XLA_FLAGS ortam değişkenini XLA_FLAGS :
$ XLA_FLAGS="--xla_dump_to=/tmp/generated" TF_XLA_FLAGS="--tf_xla_auto_jit=2" my/tensorflow/program
Döküm gerçekleştirildikten sonra, aşağıdaki dosyaları /tmp/generated içinde bulabilirsiniz:
module_XXXX.*_optimizations.txtOluşturulan XLA programları , her derlenen küme için bir tane. XLA hata raporlarını gönderirken bunları eklemek son derece yararlıdır!
module_XXXX.ptxOluşturulan PTX dosyaları.
Ayrıca, XLA kümelerinin TensorFlow grafiğinin içine gömülmesini görselleştiren grafiği aşağıdakilerle boşaltabilirsiniz:
$ TF_DUMP_GRAPH_PREFIX=/tmp/generated TF_XLA_FLAGS="--tf_xla_clustering_debug"
Tekrarlanabilir hata raporları
Oluşturulan XLA programları için dökümler ve kullanılan otomatik kümeleme yerleştirme içeriyorsa, bir hata raporunun yeniden oluşturulması çok daha kolaydır. Otomatik kümeleme ile çalışan bir TensorFlow programı için bunları oluşturmak üzere şunu başlatın:
$ TF_DUMP_GRAPH_PREFIX=/tmp/generated \
TF_XLA_FLAGS="--tf_xla_clustering_debug --tf_xla_auto_jit=2" \
XLA_FLAGS="--xla_dump_hlo_as_text --xla_dump_to=/tmp/generated" \
my/tensorflow/program"
Hataları doldururken, /tmp/generated dizinin içeriğini ekleyin (yukarıda atıfta bulunulmuştur).
Mümkünse, replay_computation kullanarak ve oluşturulan programlarda yinelemeli olarak çalıştırarak bir hatayı tek bir XLA programında izole etmeye çalışın.
daha fazla okuma
XLA ile ilgili bilinen sorunların listesi
XLA Mimarisi : XLA mimarisine genel bakış
XLA - TensorFlow, Derlendi : Google Developers Blogunda Okuyun
Github'daki XLA kaynağına göz atın!
XLA Ön Uçları
TensorFlow'dan ayrı olarak, XLA programları şu şekilde oluşturulabilir: |
AzureのWindowsServerでVOICEROIDを動かします
前回の続きです。
あらすじ
前回はPythonからWin32APIをバシバシ叩いてきりたん好きなコトを喋らせることができるようになったのでした。
今回は、クラウドサービス上(というか適当なサーバ)できりたんに働いてもらおうと思います。
Microsoft Azure
Microsoft AzureのVirtual Machinesでは、WindowsServerのインスタンスを立ち上げることが出来ます。 そのへんのWindows VPSサービスを比べるとちょっと割高な感じはします。(それはそう)
なんかAzureではWindows Client(普通のWindows 10とか)も使えるっぽいんですが、 MSDNサブスクリプションが必要とかでアレなので、今回はWindowsServerで行きます。
ぶっちゃけKVMベースのVPSサービスならWindowsが普通にインストールできるような気がしますが、 契約関係で怖い人から怒られるので避けるのが無難です。
学生なら、Microsoft Imagine(旧Dreamspark)からWindows Sever 2016のライセンスがタダでもらえるので、コレを使って自宅鯖を建てるのもアリかと思います。1
インスタンス作成
Azure Portalから適当にインスタンスを作ります。 インスタンスのサイズですが、メモリ0.75GBだとかなり厳しさがあるので1.75GBが最低ラインな感じがあります。
あと、デフォルトだと鬼高いサイズしか表示されなくて焦るんですが、 サポートされるディスクの種類をSSDにして、全て表示を押すとお手頃サイズが出てきます。
ボクは無料試用クレジットを使い切るために強めのインスタンスを建てました。
インスタンスを作成したときに勝手にネットワークセキュリティグループというリソースが作成されており、 この設定を変えてポートを開けないとリモートデスクトップ接続(RDP)ができません。
対象のネットワークセキュリティグループを開いて、受信セキュリティ規則 → 追加で設定画面を開き、 サービスからRDPを選択して許可します。
このあとHTTPも使うので、ついでにHTTPを許可する設定も追加しておきましょう。 先程と同様にして設定画面を開き、サービスからHTTPを選択して許可します。
サーバ設定
WindowsServerを使う場合は、いろいろ設定が必要になります。 普通のWindowsを使う場合は不要なものも多いので、軽く目を通す程度で。
RDP
リモートデスクトップ接続(RDP)を使ったほうが色々便利なので、そうします。 Azureだと、勝手にONになっているのでこの設定は不要です。
サーバマネージャを起動して、ローカルサーバ → リモートデスクトップ → このコンピュータへのリモート接続を許可するにチェックを入れてOKを押します。
.NET Framework
普通のWindowsだと必要になったときにダイアログが出てきて簡単にインストールできますが、 WindowsServerだとそうはいきません。
サーバマネージャを起動して、管理 → 役割と機能の追加 → .NET Framework 3.5 Featuresにチェックを入れてインストールします。
ファイアウォール
このあとHTTPをきりたんと通信するインタフェースとして使うので、80/tcpを開放します。
サーバマネージャを起動して、ローカルサーバ → Windowsファイアウォール → 詳細設定 → 受信の規則 → 新しい規則 で出てくるダイアログに従って、80番ポートを開放します。
IEの制限解除
WindowsServerではデフォルトでIEが機能制限されているので、解除します。 この後Pythonをインストールしたりするときに問題があるためです。
サーバマネージャを起動して、ローカルサーバ → IEセキュティ強化の構成 → Administratorsグループ → オフにチェックを入れてOKを押します。 今回はいろいろラクをするためにAdministratorで進めていきますが、一般ユーザで行う場合はUsersグループのセキュティ強化の構成をオフにしてください。
VOICEROIDのインストール
普通にインストーラからインストールできます。 1ライセンスで1PCにしかインストール出来ないので、注意しましょう。
ライセンス認証
サーバ起動後、一度でもRDPで接続しているとライセンス認証に失敗するようになります。 多分、ライセンス認証ドライバ(Sentinel LDK License Manager)がRDPを検知して爆発してるからです。 RDPで繋ぐような環境で使うな!!!ってことっぽいのでちょっとグレーかもしれません……
回避策として、RDPで繋ぐ前にきりたんを起動してしまいます。
起動時に自動ログインさせる
ログイン時にきりたんを自動起動する
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogonに以下の3つのキーを作成します。
AutoAdminLogon
DWORDで値は1
DWORDで値は
DefaultUserName
ログインするユーザ名
DefaultPassword
ログインユーザのパスワード
これで、起動時に勝手にログインするようになります。
次に、スタートメニューを出してgpedit.mscを入力してエンターし、 ユーザの構成 → Windowsの設定 → スクリプト → ログオンを開き、 VOICEROIDのEXEファイルを指定します。
これで、ログイン後に自動でVOICEROIDが起動します。
これによって、サーバ起動時に勝手にログインしてきりたんが立ち上がります。 かなり筋肉ソリューション感が強いですが、仕方がない><
起動時のエラー
Azure上だとサウンドデバイスがないので、起動時にエラーが表示され、再生ボタンが押せなくなります。 音声保存はできるので、今回は問題ありません。
PythonでVOICEROIDをサーバ化
前回PythonからVOICEROIDを操作できるようになったので、 あとはHTTPからリクエストを受けて音声ファイルを返すようにするだけです。
サーバへPythonとFFMPEGをインストールしておきましょう。 GUIがあるので普通にやるだけです。かんたん。
やりました
方針が定まったら書くだけ…… flaskを使って書きました。
VOICEROID操作のコードは前回の記事を参照してください。
ffmpegを使っているので、別途用意が必要です。 必要なPythonのライブラリはpypiwin32とflaskです
pip install pypiwin32 flask
コード
# coding: UTF-8
import flask
import subprocess
app = flask.Flask(__name__)
@app.route('/', methods=['GET', 'POST'])
def get():
r = flask.request
text = r.form['text'] if r.method == "POST" else r.args.get('text', None)
if text == None:
return 'plz specify `text`'
completed = subprocess.run(
['python', 'talk.py', text],
encoding='ascii',
stdout=subprocess.PIPE,
timeout=30
)
return flask.send_from_directory('./', completed.stdout.strip())
if __name__ == '__main__':
app.debug = True
app.run(host='0.0.0.0', port=80)
注意
一度適当なテキストを読み上げさせ、スクリプトを実行するディレクトリに保存させておく必要があります。 保存先ダイアログを操作するときに、保存先ディレクトリを変更せずに保存させるため、 スクリプトの実行ディレクトリと同じところがデフォルトになっていないと以後の処理が失敗します。
手抜きです……
次回予告
ということで、HTTPで好きなテキストをVOICEROIDに送って読み上げたWAVを得ることができるようになりました。 コレでブラウザさえあればきりたんボイスが聴けてしまうわけです。ああ^~きりたんかわいい!!!!
次回は、コイツを更に改造してきりたんボイスをライブストリーミングしてみる実験です。
Microsoft Imagineは終了したらしいです。↩ |
A while ago I developed a backend project that used a shared instance of Postgres. If you use Flask, as I did, probably your migration layer is dealt by Alembic and the ORM of choice is SQLAlchemy. Due to architectural constraints, the project used a different schema (public was not available). After the first migration, any change in the model was not identified by Alembic and all tables were generated again.
The scenario
In order to address a new database schema I specified table arguments in the models as illustrated by the example below.
class User(db.Model): __tablename__ = "project_users" __table_args__ = ({"schema": "users"},) id = db.Column(db.Integer, primary_key=True) email = db.Column(db.String(100), unique=True)
However, that is just one thing that must be done. The other is to properly configure Alembic to watch the new schema.
The solution
It was unclear to me what was happening but this issue in StackOverflow clarified. To sum up:
(1) It is necessary to allow Alembic to scan all schemas in database. It is done by setting EnvironmentContext.configure.include_schemas in the configuration context. Thus, the database dialect (Postgres in this scenario) executes the query below to retrieve the schemas:
SELECT nspname FROM pg_namespace WHERE nspname NOT LIKE 'pg_%' ORDER BY nspname
(2) The query above brings the schemas but we are interested in the one our application uses. By setting EnvironmentContext.configure.include_object, we can specify a callable responsible for filtering which database objects should be considered.
Code Snippet
After running the init command, migrations/env.py is generated. Since it specifies the configuration object, we will need to modify it a little bit. The code bellow illustrates that.
# ... def include_object(object, name, type_, reflected, compare_to): if hasattr(object, "schema"): return object.schema == target_metadata.schema return object.table.schema == target_metadata.schema def run_migrations_offline(): """Run migrations in 'offline' mode. This configures the context with just a URL and not an Engine, though an Engine is acceptable here as well. By skipping the Engine creation we don't even need a DBAPI to be available. Calls to context.execute() here emit the given string to the script output. """ url = config.get_main_option("sqlalchemy.url") context.configure( url=url, target_metadata=target_metadata, literal_binds=True, version_table_schema=target_metadata.schema, include_schemas=True, include_object=include_object, ) with context.begin_transaction(): context.run_migrations() def run_migrations_online(): """Run migrations in 'online' mode. In this scenario we need to create an Engine and associate a connection with the context. """ # this callback is used to prevent an auto-migration from being generated # when there are no changes to the schema # reference: http://alembic.zzzcomputing.com/en/latest/cookbook.html def process_revision_directives(context, revision, directives): if getattr(config.cmd_opts, 'autogenerate', False): script = directives[0] if script.upgrade_ops.is_empty(): directives[:] = [] logger.info('No changes in schema detected.') connectable = engine_from_config( config.get_section(config.config_ini_section), prefix='sqlalchemy.', poolclass=pool.NullPool, ) with connectable.connect() as connection: context.configure( connection=connection, target_metadata=target_metadata, version_table_schema=target_metadata.schema, include_schemas=True, process_revision_directives=process_revision_directives, include_object=include_object, **current_app.extensions['migrate'].configure_args ) with context.begin_transaction(): context.run_migrations() # ...
Lines 21 and 55setinclude_schema=True.
Lines 22 and 57pass a callableinclude_objectthat corresponds to the function at 4th line.
Line 3is our callable that specifies whether Alembic should consider or not the object in question. Pay attention that in the 5th line we check whether the object has a schema attribute. Finally lines 6 and 7 compare the schema with the one configured in the SQLAlchemy models. |
Has serious disability
Social Security Boolean MONTH Person Formula Included used 1 time
Value typeBoolean.Default valuefalseEntityperson
How is this calculated?
To calculate this variable, the following input is used
Boolean social_security__child_with_serious_disability Child has serious disability
Int social_security__medical_certification_months Number of future months the disability is expected to last for, in months
Boolean social_security__requires_constant_care_and_attention Requires constant care and attention
Where is this used?
This variable is referred to by these other variables in their own calculations
Boolean disability_allowance__family_has_eligible_child Does the family have a child who meets the criteria for disabled
Formulas
This is the formula used to calculate the value of social_security__child_meets_child_disability_allowance_criteria
0001-01-01
This formula is used for scenarios from the date 0001-01-01 onwards. More info on formulas
def formula(persons, period, parameters):
med_cert_required_months = parameters(period).entitlements.social_security.child_disability_allowance.medical_certification_required_months
return persons('social_security__child_with_serious_disability', period) * \
persons('social_security__requires_constant_care_and_attention', period) * \
(persons('social_security__medical_certification_months', period) >= med_cert_required_months)
|
This system, known as Didehban, is in charge of monitoring clients inside the network and if desired can provide tracking and reporting features.
View Product
With this system without the knowledge of how maintaining a Linux server, you can manage and configure a DNS server.
View Product
Shabakehnama is the name of the network monitoring and monitoring software that integrates all monitoring and monitoring processes.
View Product
Using environmental sensors such as temperature, humidity, water leakage, smoke, and its integration with environmental conditions, monitor your data center environment automatically.
View Product
Postchi is a powerful server-based mail server based on the Linux operating system, all of which are managed by the Web server.
View Product
Undoubtedly, the claim for the implementation of Iran wide enterprise systems is not simply achievable. but The Kayer design and programming team, with many years of experience, has carefully reviewed and analyzed all stages of implementation And executes all design and implementation steps in a way that imposes the least amount of processing on the running servers. instant processing hundreds of thousands of data per minute, storing and storing them in the database for several years, and preparing them for allegedly reported reporting that will not easily be obtained. However, experts from The Kayer have been able to overcome these problems and the produce a software that is able to respond well in the enterprise environments.
The use of The Kayer systems in government agencies and enterprise networks has made the security of designed systems more and more necessary. In most cases, an answer to software security concerns is to conduct intrusion attempts to discover vulnerabilities and then resolve them by providing security patches. Security testing and penetration testing is just beginning to evaluate software from the outside world, while security needs to be embedded within the software. Therefore, using the experienced programmers and consultants in the field and passing the security tests by the relevant organizations, the company has been able to increase the security of its systems to meet the security requirements of the organizations.
One of the strengths of Kayer is the flexibility of the company\'s systems in accordance with customer requirements, compared to other Iranian and even foreign companies operating in these areas. Generally, 90% of its customers after purchasing the software are applying for new features or customizations and will automate most of their long-term human activities using the software. The Kayeris proud to announce its readiness for customization at all levels of software.
The prominent feature of a good and efficient software is the upgrading of software features and capabilities, tailored to the advancement of technology and consumer needs. Unfortunately, most of the foreign software systems today do not provide any support and update services to Iranian users. Experienced executives want to ensure that when buying new software, a team is reviewing their new needs and providing it for use. Research and statistics show that providing after-sales services is much more important than selling software itself.
from functools import update_wrapper
class MethodDecoratorDescriptor(object):
def __init__(self, func, decorator):
self.func = func
self.decorator = decorator
def __get__(self, obj, type=None):
return self.decorator(self.func.__get__(obj, object))
def method_decorator(decorator):
def decorate(f):
return MethodDecoratorDescriptor(f, decorator)
return decorate
def spam(f):
def wrapper(value):
return f(value) + ":spamed"
return decorate
class MyClass(object):
@method_decorator(spam)
def my(self, value):
return value
foo = MyClass()
print foo .my(":spamed")
#!/usr/local/bin/php -q
<?php
set_time_limit(0);
@ob_end_flush();
ob_implicit_flush(true);
class prompt {
var $tty;
function prompt() {
if (substr(PHP_OS, 0, 3) == "WIN") {
$this->tty = fOpen("\con", "rb");
} else {
if (!($this->tty = fOpen("/dev/tty", "r"))) {
$this->tty = fOpen("php://stdin", "r");
}
}
}
function get($string, $length = 1024) {
echo $string;
$result = trim(fGets($this->tty, $length));
echo "\n";
return $result;
}
}
echo "Enter something or 'exit' to quit\n";
do {
$cmdline = new prompt();
$buffer = $cmdline->get("Something: ");
echo "You said: $buffer\n";
} while ($buffer !== "exit");
echo "Goodbye\n";
?>
By using kayer's API, The monitoring system (Shabakehnama) will send requests and alerts to our API and API will create auto-generated tickets and requests
Read More
By using kayer's API, The monitoring system (Shabakehnama) will send requests and alerts to our API and API will create auto-generated tickets and requests
Read More
By using kayer'|s API, The monitoring system (Shabakehnama) will send requests and alerts to our API and API will create auto-generated tickets and requests
Read More
By using kayer'|s API, The monitoring system (Shabakehnama) will send requests and alerts to our API and API will create auto-generated tickets and requests
Read More
A Short video of Matrix report module process. to answer the question "| How the Matrix Release Module works"|
Read More |
I have to maintain a big bunch of repositories with different languages and different language versions.
After some iterations, I came up with a simple idea by using docker for it.To ease up things for any kind of users who have to deal with this code (even qa), the last iteration is to add a "startdockercontainer.sh" script into the repository.
Following an example script for a php repository.
Given is, that the script is located in <project_root>/bin.
Given is, that the docker file exists in <project_root>/data/docker.
#!/bin/bash
####
# Starts a fitting container and creates image if needed.
#
# @todo
####
# @author stev leibelt <artodeto@bazzline.net>
# @since 2018-05-09
####
PATH_OF_THIS_SCRIPT=$(cd $(dirname "$0"); pwd)
DOCKER_IMAGE_NAME='my_php_application'
DOCKER_IMAGE_TAG='0.1.0'
if ! (docker image ls | grep -q "${DOCKER_IMAGE_NAME}\s\+${DOCKER_IMAGE_TAG}")
then
PATH_TO_THE_DOCKER_SOURCE=$(realpath ${PATH_OF_THIS_SCRIPT}/../data/docker)
echo ":: We have to build the docker container first."
echo ":: Please do the following steps first."
#this is usefull since you have to copy some ssh keys to a path
# or configure some files.
read -p ":: Hit <ENTER> to continue."
docker build -t ${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG} ${PATH_TO_THE_DOCKER_SOURCE}
fi
docker container run --mount type=bind,source="${PATH_OF_THIS_SCRIPT}"/..,target=/application -it ${DOCKER_IMAGE_NAME}:${DOCKER_IMAGE_TAG} /bin/ash
And thats it. If the image is not found on the host, we have to setup things and build the image.Afterwards we start the container and mount the repository code into /applicationof the container.
You want to start blockify-ui or blockify on a pulseaudio served arch linux and get the following error?
amixer: Mixer attach default error: No such file or directory
Traceback (most recent call last):
File "/usr/bin/blockify-ui", line 11, in <module>
load_entry_point('blockify==3.6.3', 'gui_scripts', 'blockify-ui')()
File "/usr/lib/python3.6/site-packages/blockify/gui.py", line 972, in main
_cli = cli.initialize(__doc__)
File "/usr/lib/python3.6/site-packages/blockify/cli.py", line 597, in initialize
cli = Blockify(_blocklist)
File "/usr/lib/python3.6/site-packages/blockify/cli.py", line 63, in __init__
self.channels = self.initialize_channels()
File "/usr/lib/python3.6/site-packages/blockify/cli.py", line 184, in initialize_channels
amixer_output = subprocess.check_output("amixer")
File "/usr/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/usr/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'amixer' returned non-zero exit status 1.
Try to call "amixer" first and check the output. The chance is high that it will be something like the follow:
amixer: Mixer attach default error: No such file or directory
How to fix this?
Install the following tools:* extra/pulseaudio-alsa* extra/alsa-utils* extra/alsa-plugins* extra/alsa-lib
After that, amixer should output something meaningful and blockify should work as expected.
We detected a bug that is known since 2003.
The value of an auto increment column is set back to zero when you are having an empty table and restarting the MySQL DBMS.We run into this issue by using the auto increment value as a history id into a second table.
How can you work around this issue?
The easiest way is to order by id descending on the second table or to setup a "start up" shell script that calculates and sets the auto increment value.
You have a Debian 8 installation and get an error like the following when you want to install or update the owncloud client?
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://download.opensuse.org Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 4ABE1AC7557BEFF9
W: Failed to fetch http://download.opensuse.org/repositories/isv:/ownCloud:/desktop/Debian_8.0/Release
W: Some index files failed to download. They have been ignored, or old ones used instead.
Execute the following command and try it again.
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 4ABE1AC7557BEFF9
|
This page describes how you can create new bots that receive, process, and respond to events from Google Chat (formerly Hangouts Chat):
Receive messages and other kinds of events generated by Google Chat
Send event responses and other messages into Google Chat
Endpoint types
Events from Google Chat are delivered to your bot via an endpoint, of whichthere are different types:
HTTP endpointspresent your bot as a web service. You'll need to set up a web server to use as an interface for your bot's implementation. Your bot can respond synchronously or asynchronously to these events.
Google Cloud Pub/Sub endpointsuse a topic on Google Cloud Pub/Sub to relay an event to your bot's implementation. This is useful when your implementation is behind a firewall. Bots that use pub/sub endpoints can only respond asynchronously.
DialogFlow endpointslet your bot utilize the natural language processing (NLP) capabilities of DialogFlow. Please see DialogFlow documentation for details.
For a simple, straightforward bot architecture, try implementing a bot using an HTTP endpoint (a web service, essentially) that responds synchronously, always enclosing its payload in the HTTP POST response. This approach does not involve authorization, so it doesn't need a service account. See the simple bot implementation section below for an example of this style of bot.
You may need to take a more complex approach if your bot is behind a firewall or sends unsolicited messages such as alarms or other notifications to Google Chat.
tl;dr... A very simple bot implementation
The following code implements a simple bot in Python using the Flask web framework.
#!/usr/bin/env python3
"""Example bot that returns a synchronous response."""
from flask import Flask, request, json
app = Flask(__name__)
@app.route('/', methods=['POST'])
def on_event():
"""Handles an event from Google Chat."""
event = request.get_json()
if event['type'] == 'ADDED_TO_SPACE' and not event['space']['singleUserBotDm']:
text = 'Thanks for adding me to "%s"!' % (event['space']['displayName'] if event['space']['displayName'] else 'this chat')
elif event['type'] == 'MESSAGE':
text = 'You said: `%s`' % event['message']['text']
else:
return
return json.jsonify({'text': text})
if __name__ == '__main__':
app.run(port=8080, debug=True)
Because it's a web service, the bot presents an HTTP endpoint and doesn't need to use Cloud Pub/Sub to relay events to it. And because it always returns its response payload within the JSON response, it doesn't need to authenticate using a service account.
Handling events from Hangouts Chat
This section describes how to receive and process events that your bot receives from Google Chat.
Registering the bot
Before your bot can receive events from Google Chat, you must specify its endpoint in the Chat API configuration tab when you publish your bot.
Once you've registered the endpoint and published your bot, Google Chat will recognize events addressed to your bot and dispatch them to the specified endpoint.
Verifying bot authenticity
Once you've registered your HTTP bot, you need a way for your implementation to verify that the request is actually coming from Google.
Google Chat includes a bearer token in the Authorization header of every HTTP Request to a bot. For example:
POST
Host: yourboturl.com
Authorization: Bearer AbCdEf123456
Content-Type: application/json
User-Agent: Google-Dynamite
The string AbCdEf123456 in the example above is the bearer authorization token.This is a cryptographic token produced by Google. You can verify your bearer token usingan open source Google API client library:
Java: https://github.com/google/google-api-java-client
Python: https://github.com/google/google-api-python-client
.NET: https://github.com/google/google-api-dotnet-client
All bearer tokens sent with requests from Google chat will havechat@system.gserviceaccount.com as the issuee, with the audience field specifying the target bot's project number from theGoogle API Console.For example, if the request is for a bot with the project number 1234567890, then theaudience is 1234567890.
You should verify that the request is coming from Googleand is intended for the target bot. If the token doesn't verify, the bot shouldrespond to the request with an HTTP response code 401 (Unauthorized).
Java
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdTokenVerifier;
import com.google.api.client.googleapis.auth.oauth2.GooglePublicKeysManager;
import com.google.api.client.http.apache.ApacheHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson.JacksonFactory;
/** Tool for verifying JWT Tokens for Bots in Google Chat. */
public class JWTVerify {
// Bearer Tokens received by bots will always specify this issuer.
static String CHAT_ISSUER = "chat@system.gserviceaccount.com";
// Url to obtain the public certificate for the issuer.
static String PUBLIC_CERT_URL_PREFIX =
"https://www.googleapis.com/service_accounts/v1/metadata/x509/";
// Intended audience of the token, which will be the project number of the bot.
static String AUDIENCE = "1234567890";
// Get this value from the request's Authorization HTTP header.
// For example, for "Authorization: Bearer AbCdEf123456" use "AbCdEf123456"
static String BEARER_TOKEN = "AbCdEf123456";
public static void main(String[] args) throws GeneralSecurityException, IOException {
JsonFactory factory = new JacksonFactory();
GooglePublicKeysManager.Builder keyManagerBuilder =
new GooglePublicKeysManager.Builder(new ApacheHttpTransport(), factory);
String certUrl = PUBLIC_CERT_URL_PREFIX + CHAT_ISSUER;
keyManagerBuilder.setPublicCertsEncodedUrl(certUrl);
GoogleIdTokenVerifier.Builder verifierBuilder =
new GoogleIdTokenVerifier.Builder(keyManagerBuilder.build());
verifierBuilder.setIssuer(CHAT_ISSUER);
GoogleIdTokenVerifier verifier = verifierBuilder.build();
GoogleIdToken idToken = GoogleIdToken.parse(factory, BEARER_TOKEN);
if (idToken == null) {
System.out.println("Token cannot be parsed");
System.exit(-1);
}
// Verify valid token, signed by CHAT_ISSUER.
if (!verifier.verify(idToken)
|| !idToken.verifyAudience(Collections.singletonList(AUDIENCE))
|| !idToken.verifyIssuer(CHAT_ISSUER)) {
System.out.println("Invalid token");
System.exit(-1);
}
// Token originates from Google and is targeted to a specific client.
System.out.println("The token is valid");
}
}
Python
import sys
from oauth2client import client
# Bearer Tokens received by bots will always specify this issuer.
CHAT_ISSUER = 'chat@system.gserviceaccount.com'
# Url to obtain the public certificate for the issuer.
PUBLIC_CERT_URL_PREFIX = 'https://www.googleapis.com/service_accounts/v1/metadata/x509/'
# Intended audience of the token, which will be the project number of the bot.
AUDIENCE = '1234567890'
# Get this value from the request's Authorization HTTP header.
# For example, for 'Authorization: Bearer AbCdEf123456' use 'AbCdEf123456'.
BEARER_TOKEN = 'AbCdEf123456'
try:
# Verify valid token, signed by CHAT_ISSUER, intended for a third party.
token = client.verify_id_token(
BEARER_TOKEN, AUDIENCE, cert_uri=PUBLIC_CERT_URL_PREFIX + CHAT_ISSUER)
if token['iss'] != CHAT_ISSUER:
sys.exit('Invalid issuee')
except:
sys.exit('Invalid token')
# Token originates from Google and is targeted to a specific client.
print 'The token is valid'
Event payload
When your bot receives an event from Google Chat, the event includes arequest body: this is the JSON payload that represents the event. The requestbody always includes the following information:
type: A string that specifies the type of the event.
eventTime: A string containing the event timestamp.
Additional information contained in the request body depends on the event type. The following example shows a possible payload:
{
"type": "MESSAGE",
"eventTime": "2017-03-02T19:02:59.910959Z",
"space": {
"name": "spaces/AAAAAAAAAAA",
"displayName": "Best Dogs Discussion Room",
"type": "ROOM"
},
"message": {
"name": "spaces/AAAAAAAAAAA/messages/CCCCCCCCCCC",
"sender": {
"name": "users/12345678901234567890",
"displayName": "Chris Corgi",
"avatarUrl": "https://lh3.googleusercontent.com/.../photo.jpg",
"email": "chriscorgi@example.com"
},
"createTime": "2017-03-02T19:02:59.910959Z",
"text": "I mean is there any good reason their legs should be longer?",
"thread": {
"name": "spaces/AAAAAAAAAAA/threads/BBBBBBBBBBB"
}
}
}
See the event formats reference for details of the different event types and their request formats.
Processing the event
When your bot receives an event from Google Chat, what it does with that event is completely implementation dependent. The bot may look up some information from a data source, record the event information, or just about anything else. This processing behavior is essentially what defines the bot.
In most cases, a bot will not only process the information contained in the event, but will generate a response back to the thread that issued the event. The following diagram describes a typical interaction with a bot in a chat room:
There are three kinds of events shown in the above diagram: ADDED_TO_SPACE,MESSAGE, and REMOVED_FROM_SPACE. A bot can't respond after being removedfrom a room, but it can respond to the other two types.
Responding synchronously
A bot can respond to an event synchronously by returning a JSON-formatted message payload in the HTTP response. The deadline for a synchronous response is 30 seconds.
A synchronous response from a bot is always posted in the thread that generated the event to the bot.
Responding asynchronously
If a bot needs to respond to a user message beyond the 30-second deadline (for example, it may need to report back after completing a long-running task), it can respond asynchronously. This is exactly like sending a spontaneous message as described in the into an existing thread section.
Lightweight bots that don't use service accounts cannot respond asynchronously.
Retry
If an HTTP request to your bot fails (e.g. timeout, temporary network failure, or a non-2xx HTTP status code), Google Chat will additionally retry delivery twice, with at least a ten-second delay between each retry. As a result, a bot may receive the same message up to three times in certain situations. No retry is attempted if the request completes successfully but returns an invalid message payload.
Bot-initiated messages
This section describes how bots can send arbitrary messages into a space.
Many bots send messages only in direct response to an event that they receive from Google Chat. However, some bots might send messages when triggered by other things, for example:
A time-based alarm like a calendar event
A change in state of some relevant data
The completion of a remote process
This section describes how to send these messages from your app to Google Chat.
Into an existing thread
To send a message as a reply in an existing thread, specify the thread's ID in the message payload as shown below:
{ "text": "...", "thread": { "name": "spaces/SPACE_ID/threads/THREAD_ID" } }
The specific THREAD_ID is available in the payload of MESSAGEevents that your bot receives from Google Chat. Keep track of this ID so thatthe bot can inject messages into the thread.
As a new thread
To send a message into Google Chat as a new thread, your bot should omit the thread ID, as shown below:
https://chat.googleapis.com/v1/spaces/SPACE_ID/messages
Requests must specify Content-Type: application/json in the request header.See the Google Chat API Message Formatreference for the JSON format of Google Chat messages.The following example shows a simple request using cURL:
curl -X POST \
-H 'Content-Type: application/json' \
'https://chat.googleapis.com/....' \
-d '{"text": "Hello!"}'
Thread key
In many cases, bots may want to post multiple messages related to the same entity into the same thread. For example, a bug tracker integration may want to post all notification messages related to the same bug into the same thread.
To achieve this, bots can specify an arbitrary thread key in each request. Messages posted with the same thread key will be grouped into the same thread. For example, the example bug tracker integration above might use the bug ID as part of a consistent thread key. The first notification message for a bug will then create a new thread; all subsequent messages for the same bug will be posted into that same thread.
The thread key is specified in the threadKey query parameter in aninbound HTTP request. For instance:
https://chat.googleapis.com/v1/spaces/SPACE_ID/messages?\ threadKey=ARBITRARY_STRING
Thread keys are also scoped to a specific bot; if two different bots happen to both post messages using the same thread key, those two messages will not be grouped into the same thread. |
Devel::Peek - A data debugging tool for the XS programmer
use Devel::Peek;
Dump( $a );
Dump( $a, 5 );
DumpArray( 5, $a, $b, ... );
mstat "Point 5";
use Devel::Peek ':opd=st';
Devel::Peek contains functions which allows raw Perl datatypes to be manipulated from a Perl script. This is used by those who do XS programming to check that the data they are sending from C to Perl looks as they think it should look. The trick, then, is to know what the raw datatype is supposed to look like when it gets to Perl. This document offers some tips and hints to describe good and bad raw data.
It is very possible that this document will fall far short of being useful to the casual reader. The reader is expected to understand the material in the first few sections of perlguts.
Devel::Peek supplies a Dump() function which can dump a raw Perl datatype, and mstat("marker") function to report on memory usage (if perl is compiled with corresponding option). The function DeadCode() provides statistics on the data "frozen" into inactive CV. Devel::Peek also supplies SvREFCNT(), SvREFCNT_inc(), and SvREFCNT_dec() which can query, increment, and decrement reference counts on SVs. This document will take a passive, and safe, approach to data debugging and for that it will describe only the Dump() function.
Function DumpArray() allows dumping of multiple values (useful when you need to analyze returns of functions).
The global variable $Devel::Peek::pv_limit can be set to limit the number of character printed in various string values. Setting it to 0 means no limit.
If use Devel::Peek directive has a :opd=FLAGS argument, this switches on debugging of opcode dispatch. FLAGS should be a combination of s, t, and P (see -D flags in perlrun). :opd is a shortcut for :opd=st.
CvGV($cv) return one of the globs associated to a subroutine reference $cv.
debug_flags() returns a string representation of $^D (similar to what is allowed for -D flag). When called with a numeric argument, sets $^D to the corresponding value. When called with an argument of the form "flags-flags", set on/off bits of $^D corresponding to letters before/after -. (The returned value is for $^D before the modification.)
runops_debug() returns true if the current opcode dispatcher is the debugging one. When called with an argument, switches to debugging or non-debugging dispatcher depending on the argument (active for newly-entered subs/etc only). (The returned value is for the dispatcher before the modification.)
When perl is compiled with support for memory footprint debugging (default with Perl's malloc()), Devel::Peek provides an access to this API.
Use mstat() function to emit a memory state statistic to the terminal. For more information on the format of output of mstat() see "Using $ENV{PERL_DEBUG_MSTATS}" in perldebguts.
Three additional functions allow access to this statistic from Perl. First, use mstats_fillhash(%hash) to get the information contained in the output of mstat() into %hash. The field of this hash are
minbucket nbuckets sbrk_good sbrk_slack sbrked_remains sbrks start_slacktopbucket topbucket_ev topbucket_odd total total_chain total_sbrk totfree
Two additional fields free, used contain array references which provide per-bucket count of free and used chunks. Two other fields mem_size, available_size contain array references which provide the information about the allocated size and usable size of chunks in each bucket. Again, see "Using $ENV{PERL_DEBUG_MSTATS}" in perldebguts for details.
Keep in mind that only the first several "odd-numbered" buckets are used, so the information on size of the "odd-numbered" buckets which are not used is probably meaningless.
The information in
mem_size available_size minbucket nbuckets
is the property of a particular build of perl, and does not depend on the current process. If you do not provide the optional argument to the functions mstats_fillhash(), fill_mstats(), mstats2hash(), then the information in fields mem_size, available_size is not updated.
fill_mstats($buf) is a much cheaper call (both speedwise and memory-wise) which collects the statistic into $buf in machine-readable form. At a later moment you may need to call mstats2hash($buf, %hash) to use this information to fill %hash.
All three APIs fill_mstats($buf), mstats_fillhash(%hash), and mstats2hash($buf, %hash) are designed to allocate no memory if used the second time on the same $buf and/or %hash.
So, if you want to collect memory info in a cycle, you may call
$#buf = 999;
fill_mstats($_) for @buf;
mstats_fillhash(%report, 1); # Static info too
foreach (@buf) {
# Do something...
fill_mstats $_; # Collect statistic
}
foreach (@buf) {
mstats2hash($_, %report); # Preserve static info
# Do something with %report
}
The following examples don't attempt to show everything as that would be a monumental task, and, frankly, we don't want this manpage to be an internals document for Perl. The examples do demonstrate some basics of the raw Perl datatypes, and should suffice to get most determined people on their way. There are no guidewires or safety nets, nor blazed trails, so be prepared to travel alone from this point and on and, if at all possible, don't fall into the quicksand (it's bad for business).
Oh, one final bit of advice: take perlguts with you. When you return we expect to see it well-thumbed.
Let's begin by looking a simple scalar which is holding a string.
use Devel::Peek;
$a = 42; $a = "hello";
Dump $a;
The output:
SV = PVIV(0xbc288) at 0xbe9a8
REFCNT = 1
FLAGS = (POK,pPOK)
IV = 42
PV = 0xb2048 "hello"\0
CUR = 5
LEN = 8
This says $a is an SV, a scalar. The scalar type is a PVIV, which is capable of holding an integer (IV) and/or a string (PV) value. The scalar's head is allocated at address 0xbe9a8, while the body is at 0xbc288. Its reference count is 1. It has the POK flag set, meaning its current PV field is valid. Because POK is set we look at the PV item to see what is in the scalar. The \0 at the end indicate that this PV is properly NUL-terminated. Note that the IV field still contains its old numeric value, but because FLAGS doesn't have IOK set, we must ignore the IV item. CUR indicates the number of characters in the PV. LEN indicates the number of bytes allocated for the PV (at least one more than CUR, because LEN includes an extra byte for the end-of-string marker, then usually rounded up to some efficient allocation unit).
If the scalar contains a number the raw SV will be leaner.
use Devel::Peek;
$a = 42;
Dump $a;
The output:
SV = IV(0xbc818) at 0xbe9a8
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This says $a is an SV, a scalar. The scalar is an IV, a number. Its reference count is 1. It has the IOK flag set, meaning it is currently being evaluated as a number. Because IOK is set we look at the IV item to see what is in the scalar.
If the scalar from the previous example had an extra reference:
use Devel::Peek;
$a = 42;
$b = \$a;
Dump $a;
The output:
SV = IV(0xbe860) at 0xbe9a8
REFCNT = 2
FLAGS = (IOK,pIOK)
IV = 42
Notice that this example differs from the previous example only in its reference count. Compare this to the next example, where we dump $b instead of $a.
This shows what a reference looks like when it references a simple scalar.
use Devel::Peek;
$a = 42;
$b = \$a;
Dump $b;
The output:
SV = IV(0xf041c) at 0xbe9a0
REFCNT = 1
FLAGS = (ROK)
RV = 0xbab08
SV = IV(0xbe860) at 0xbe9a8
REFCNT = 2
FLAGS = (IOK,pIOK)
IV = 42
Starting from the top, this says $b is an SV. The scalar is an IV, which is capable of holding an integer or reference value. It has the ROK flag set, meaning it is a reference (rather than an integer or string). Notice that Dump follows the reference and shows us what $b was referencing. We see the same $a that we found in the previous example.
Note that the value of RV coincides with the numbers we see when we stringify $b. The addresses inside IV() are addresses of X*** structures which hold the current state of an SV. This address may change during lifetime of an SV.
This shows what a reference to an array looks like.
use Devel::Peek;
$a = [42];
Dump $a;
The output:
SV = IV(0xc85998) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xc70de8
SV = PVAV(0xc71e10) at 0xc70de8
REFCNT = 1
FLAGS = ()
ARRAY = 0xc7e820
FILL = 0
MAX = 0
ARYLEN = 0x0
FLAGS = (REAL)
Elt No. 0
SV = IV(0xc70f88) at 0xc70f98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This says $a is a reference (ROK), which points to another SV which is a PVAV, an array. The array has one element, element zero, which is another SV. The field FILL above indicates the last element in the array, similar to $#$a.
If $a pointed to an array of two elements then we would see the following.
use Devel::Peek 'Dump';
$a = [42,24];
Dump $a;
The output:
SV = IV(0x158c998) at 0x158c9a8
REFCNT = 1
FLAGS = (ROK)
RV = 0x1577de8
SV = PVAV(0x1578e10) at 0x1577de8
REFCNT = 1
FLAGS = ()
ARRAY = 0x1585820
FILL = 1
MAX = 1
ARYLEN = 0x0
FLAGS = (REAL)
Elt No. 0
SV = IV(0x1577f88) at 0x1577f98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
Elt No. 1
SV = IV(0x158be88) at 0x158be98
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 24
Note that Dump will not report all the elements in the array, only several first (depending on how deep it already went into the report tree).
The following shows the raw form of a reference to a hash.
use Devel::Peek;
$a = {hello=>42};
Dump $a;
The output:
SV = IV(0x8177858) at 0x816a618
REFCNT = 1
FLAGS = (ROK)
RV = 0x814fc10
SV = PVHV(0x8167768) at 0x814fc10
REFCNT = 1
FLAGS = (SHAREKEYS)
ARRAY = 0x816c5b8 (0:7, 1:1)
hash quality = 100.0%
KEYS = 1
FILL = 1
MAX = 7
RITER = -1
EITER = 0x0
Elt "hello" HASH = 0xc8fd181b
SV = IV(0x816c030) at 0x814fcf4
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 42
This shows $a is a reference pointing to an SV. That SV is a PVHV, a hash. Fields RITER and EITER are used by "each" in perlfunc.
The "quality" of a hash is defined as the total number of comparisons needed to access every element once, relative to the expected number needed for a random hash. The value can go over 100%.
The total number of comparisons is equal to the sum of the squares of the number of entries in each bucket. For a random hash of <n> keys into <k> buckets, the expected value is:
n + n(n-1)/2k
The Dump() function, by default, dumps up to 4 elements from a toplevel array or hash. This number can be increased by supplying a second argument to the function.
use Devel::Peek;
$a = [10,11,12,13,14];
Dump $a;
Notice that Dump() prints only elements 10 through 13 in the above code. The following code will print all of the elements.
use Devel::Peek 'Dump';
$a = [10,11,12,13,14];
Dump $a, 5;
This is what you really need to know as an XS programmer, of course. When an XSUB returns a pointer to a C structure that pointer is stored in an SV and a reference to that SV is placed on the XSUB stack. So the output from an XSUB which uses something like the T_PTROBJ map might look something like this:
SV = IV(0xf381c) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xb8ad8
SV = PVMG(0xbb3c8) at 0xc859a0
REFCNT = 1
FLAGS = (OBJECT,IOK,pIOK)
IV = 729160
NV = 0
PV = 0
STASH = 0xc1d10 "CookBookB::Opaque"
This shows that we have an SV which is a reference, which points at another SV. In this case that second SV is a PVMG, a blessed scalar. Because it is blessed it has the OBJECT flag set. Note that an SV which holds a C pointer also has the IOK flag set. The STASH is set to the package name which this SV was blessed into.
The output from an XSUB which uses something like the T_PTRREF map, which doesn't bless the object, might look something like this:
SV = IV(0xf381c) at 0xc859a8
REFCNT = 1
FLAGS = (ROK)
RV = 0xb8ad8
SV = PVMG(0xbb3c8) at 0xc859a0
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 729160
NV = 0
PV = 0
Looks like this:
SV = IV(0x24d2dd8) at 0x24d2de8
REFCNT = 1
FLAGS = (TEMP,ROK)
RV = 0x24e79d8
SV = PVCV(0x24e5798) at 0x24e79d8
REFCNT = 2
FLAGS = ()
COMP_STASH = 0x22c9c50 "main"
START = 0x22eed60 ===> 0
ROOT = 0x22ee490
GVGV::GV = 0x22de9d8 "MY" :: "top_targets"
FILE = "(eval 5)"
DEPTH = 0
FLAGS = 0x0
OUTSIDE_SEQ = 93
PADLIST = 0x22e9ed8
PADNAME = 0x22e9ec0(0x22eed00) PAD = 0x22e9ea8(0x22eecd0)
OUTSIDE = 0x22c9fb0 (MAIN)
This shows that
the subroutine is not an XSUB (since START and ROOT are non-zero, and XSUB is not listed, and is thus null);
that it was compiled in the package main;
under the name MY::top_targets;
inside a 5th eval in the program;
it is not currently executed (see DEPTH);
it has no prototype (PROTOTYPE field is missing).
Dump, mstat, DeadCode, DumpArray, DumpWithOP and DumpProg, fill_mstats, mstats_fillhash, mstats2hash by default. Additionally available SvREFCNT, SvREFCNT_inc and SvREFCNT_dec.
Readers have been known to skip important parts of perlguts, causing much frustration for all.
Ilya Zakharevich ilya@math.ohio-state.edu
Copyright (c) 1995-98 Ilya Zakharevich. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Author of this software makes no claim whatsoever about suitability, reliability, edability, editability or usability of this product, and should not be kept liable for any damage resulting from the use of it. If you can use it, you are in luck, if not, I should not be kept responsible. Keep a handy copy of your backup tape at hand. |
이 글은 python - What exactly are iterator, iterable, and iteration? - Stack Overflow를 번역하면서 보충, 정리한 것임을 먼저 밝힙니다.
Iteration
어떤 객체의 원소에 하나씩 차례로 접근하는 것. 명시적으로든 암묵적으로든 반복문을 사용해 객체의 여러 원소에 하나하나 접근하면 그것은 이터레이션(iteration)이다.
Iteration | Definition of Iteration by Merriam-Webster
the repetition of a sequence of computer instructions a specified number of times or until a condition is met
이터레이션은 그냥 말 그대로 반복이다. 특정 횟수만큼 또는 어떤 조건이 만족될 때까지 명령을 반복하는 것.
Iterable & Iterator
파이썬의 iterable과 iterator은 특별한 의미를 가지고 있다.
Iterable의 특징은 일단 내부 원소에 하나씩 차례로 접근할 수 있다는 점이다. 따라서 이터러블은 for item in iterable: ...의 문법을 사용할 수 있다. 그런데 이러한 문법을 수행할 수 있는 것은 이터러블이 __iter__ 메소드를 가지고 있는 객체이기 때문이다. 이것이 이터러블의 좀 더 엄밀한 정의이다. 그리고 이 __iter__ 메소스가 하는 일은 iterator를 반환하는 것이다.
Iterator는 __next__ 메소드로 데이터를 순차적으로 호출할 수 있는 객체이다. 즉 실제로 루프를 돌면서 반복 도중에 현재 위치가 어디인지를 기억하는 것은 이터레이터이다. 이터레이터는 __iter__ 메소드도 가지고 있는데, 실행할 경우 자기자신이 반환된다.
보통은 파이썬에서 for 루프나, map, list comprehension을 사용하면, Iterator의 __next__ 메소드가 자동으로 호출되고 iteration 과정을 수행한다.
아래는 코드 예시이다.
>>> s = 'cat' # s는 이터러블이다. # s는 변경이 불가능한(immutable) 문자열(str) 객체이다. # s는 상태(state)를 가지지 않는다. # s는 __getitem__ 메소드를 가진다. >>> next(s) TypeError: 'str' object is not an iterator
s는 이터러블이지 이터레이터가 아니기 때문에 next 메소드를 쓸 수 없다. iter 메소드를 사용해면 이터러블로 이터레이터를 만들 수 있다.
>>> t = iter(s) # t는 이터레이터다. # t는 상태(state)를 가지고 있다. # t는 __next__ 메소드와 __iter__ 메소드를 가진다. >>> type(s) # <class 'str'> >>> type(t) # <class 'str_iterator'>
iter을 통해 이터레이터 타입의 객체 t를 만들었다.
>>> next(t) # next() 함수는 현재값을 반환하고, 다음 상태로 넘어간다. 'c' >>> next(t) 'a' >>> next(t) 't' >>> next(t) Traceback (most recent call last): ... StopIteration
이터레이터의 __iter__ 메서드는 이터레이터 객체 자신을 돌려준다 (self iterable).
>>> iter(t) is t True
파이썬 도큐먼트는 iterable, iterator을 다음과 같이 자세하게 정의하고 있다. (최근에 파이썬 도큐먼트의 한국어판이 드디어 번역이 완료되어 공개되었다! 감사합니다 )
iterable (이터러블)
멤버들을 한 번에 하나씩 돌려줄 수 있는 객체. 이터러블의 예로는 모든 (list,str,tuple같은) 시퀀스 형들,dict같은 몇몇 비시퀀스 형들, 파일 객체들,__iter__()나 시퀀스 개념을 구현하는__getitem__()메서드를 써서 정의한 모든 클래스의 객체들이 있다. 이터러블은for루프에 사용될 수 있고, 시퀀스를 필요로 하는 다른 많은 곳 (zip(),map(), …) 에 사용될 수 있다. 이터러블 객체가 내장 함수iter()에 인자로 전달되면, 그 객체의 이터레이터를 돌려준다. 이 이터레이터는 값들의 집합을 한 번 거치는 동안 유효하다. 이터러블을 사용할 때, 보통은iter()를 호출하거나, 이터레이터 객체를 직접 다룰 필요는 없다.for문은 이것들을 여러분을 대신해서 자동으로 해주는데, 루프를 도는 동안 이터레이터를 잡아둘 이름 없는 변수를 만든다.
iterator (이터레이터)
데이터의 스트림을 표현하는 객체. 이터레이터의__next__()메서드를 반복적으로 호출하면 (또는 내장 함수next()로 전달하면) 스트림에 있는 항목들을 차례대로 돌려준다. 더 이상의 데이터가 없을 때는 대신StopIteration예외를 일으킨다. 이 지점에서, 이터레이터 객체는 소진되고, 이후의 모든__next__()메서드 호출은StopIteration예외를 다시 일으키기만 한다. 이터레이터는 이터레이터 객체 자신을 돌려주는__iter__()메서드를 가질 것이 요구되기 때문에, 이터레이터는 이터러블이기도 하고 다른 이터러블들을 받아들이는 대부분의 곳에서 사용될 수 있다. 중요한 예외는 여러 번의 이터레이션을 시도하는 코드다. (list 같은) 컨테이너 객체는iter()함수로 전달하거나for루프에 사용할 때마다 새 이터레이터를 만든다. 이런 것을 이터레이터에 대해서 수행하려고 하면, 지난 이터레이션에 사용된 이미 소진된 이터레이터를 돌려줘서, 빈 컨테이너처럼 보이게 만든다.
Notes
iterable, iterator는 파이썬에서 iteration을 가능하게 하는 객체들이다.
iterable은 iteration을 할 수 있는 객체이고, 직접 iteration을 수행하는 객체는 iterator이다.
iterable의 __iter__메소드를 이용하면 iterator 객체를 만들 수 있다.
for문은 이걸 자동으로 해주기 때문에 명시적으로__iter__메소드를 호출할 필요가 없다. |
Magento font icons usage and examples
Icons are a simple and effective way to draw users into the content of your website. They can help you structure content and separate different sections of the page. The primary goal of using icons should be to help the user find information on the page.
Icons
With icons you can quickly sum up what your text is about. Use an icon that encapsulates the point you are trying to get across in your paragraph. This will make the text more accessible to your readers.
Create an icon
example of a simple icon
You can place icons just about anywhere using simple markup. We are going to use an inline HTML element such as <span> and add appropriate classes to it. These are required classes: ic and the icon's name prefixed with ic-, for example ic-star. Here's an example of the code which will add a star icon:
<span class="ic ic-star"></span> example of a simple icon
If you change the font-size of the icon's container, the icon gets bigger. Same things goes for color, drop shadow, and anything else that gets inherited using CSS.
Icon size
ic-lg
ic-2x
ic-3x
ic-4x
To increase icon size relative to the font-size of the icon's container, use the following classes: ic-lg (increases the size of the icon by 33%), ic-2x, ic-3x, ic-4x, ic-5x, ic-6x, ic-7x or ic-8x.
<span class="ic ic-star"></span> <span class="ic ic-star ic-lg"></span> ic-lg <span class="ic ic-star ic-2x"></span> ic-2x <span class="ic ic-star ic-3x"></span> ic-3x <span class="ic ic-star ic-4x"></span> ic-4x
If your icons are getting chopped off on top and bottom,
make sure you have sufficient line-height.
Inline styles
Now you can start having more fun with icons. By default all icons have the same color as text, but if you want to change the color of selected icon, you can do it with inline CSS styles. Add the style attribute to the icon element and specify the color.
You can add inline styles to icons the same way as to any other HTML elements in a HTML document. The style attribute can contain any CSS property, such as color, font-size, text-shadow etc.
<span class="ic ic-heart-o ic-3x"></span> <span class="ic ic-heart-o ic-3x" style="color: #e91e8f;"></span> <span class="ic ic-heart-o ic-3x" style="color: #95dc24;"></span>
Animated icon
Use the ic-spin class to get any icon to rotate.
<span class="ic ic-star ic-2x ic-spin" style="color: #be64e4;"></span> <span class="ic ic-reload ic-2x ic-spin" style="color: #5bd2ec;"></span>
Examples of icons
Iconboxes
Simple iconbox
example of an iconbox
To display an icon inside a box with background color (we call it an iconbox), add the ib class to the icon element. With the optional class ib-hover, the color of the iconbox will change on mouse hover over the iconbox.
Background color will be automatically added to the icon element. Make sure to leave the <span> tag empty – otherwise the contents of the tag will be displayed together with the icon and any additional space can dislocate the icon.
<span class="ic ic-star ib ib-hover"></span> example of an iconbox
The default background color and color of the icon can be configured in the admin panel:
Theme Design > Colors > Iconbox
Iconbox size
To increase iconbox size, use the following classes: ib-size-l, ib-size-xl, ib-size-xxl, ib-size-xxxl.
The icon size is independent of the iconbox size and can be increased with classes which were described earlier. For example, add class ic-lg to make the icon a little bit bigger.
<span class="ic ic-heart-o ib ib-hover"></span> <span class="ic ic-heart-o ic-lg ib ib-hover ib-size-l"></span> <span class="ic ic-heart-o ic-lg ib ib-hover ib-size-xl"></span> <span class="ic ic-heart-o ic-2x ib ib-hover ib-size-xxl"></span> <span class="ic ic-heart-o ic-3x ib ib-hover ib-size-xxxl"></span>
Iconbox shape
To change the shape of the iconbox, use one of the following classes: ib-circle, ib-rounded, ib-square. By default the iconbox is always circular.
<span class="ic ic-star ic-lg ib ib-hover ib-size-l"></span> <span class="ic ic-star ic-lg ib ib-hover ib-size-l ib-rounded"></span> <span class="ic ic-star ic-lg ib ib-hover ib-size-l ib-square"></span>
Iconbox effects
To add eye-catching hover effects to the iconbox, use one of the following combinations of classes. Note that in each case the combination consists of two classes:
ib-ef-1 ib-ef-1a
ib-ef-1 ib-ef-1b
ib-ef-2 ib-ef-2a
ib-ef-2 ib-ef-2b
ib-ef-3 ib-ef-3a
ib-ef-3 ib-ef-3b
<span class="ic ic-plane ic-lg ib ib-size-l ib-ef-1 ib-ef-1a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-1 ib-ef-1b"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-2 ib-ef-2a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-2 ib-ef-2b"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-3 ib-ef-3a"></span> <span class="ic ic-plane ic-lg ib ib-size-l ib-ef-3 ib-ef-3b"></span>
Examples of iconboxes
Blocks of text with icon
Icons can help you structure content and separate different sections of the page. The primary goal of using icons should be to help the user find information on the page and with icons you can quickly sum up what your text is about. For example, when you build lists, instead of using standard bullets, you can use icons to draw attention to paragraphs and other blocks of content.
Simple block with icon
Heading Example
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks. Icons are an effective way to...
To create a simple block of text with an icon, wrap your text inside a <div> element with the feature class. Here's a minimal example:
<div class="feature"> <span class="left ic ic-star ic-2x" style="color: #5bd2ec;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks. Icons are an effective way to...</p> </div>
If you add left or right class to the icon, the icon will be taken from the normal flow and placed along the left or right side of its container, and text will wrap around it.
Indented block
To display a block with indentation on the left side, add the indent class to the block element:
To increase the size of the indentation, use the following classes together with the indent class: indent-size-l, indent-size-xl, indent-size-xxl, indent-size-xxxl.
<div class="feature feature-icon-hover indent"> <span class="left ic ic-star ic-2x" style="color: #de2666;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
Block with iconbox and hover effect
To change the background color of the iconbox on mouse hover over the entire block, add the feature-icon-hover class to the block element.
If you increase the iconbox size (by adding a class such as ib-size-xl), you will also need to add corresponding class (in this case: indent-size-xl) to the block element. It will adjust the size of the indentation.
<div class="feature feature-icon-hover indent indent-size-xl"> <span class="left ic ic-star ic-lg ib ib-size-xl"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
The default background color and color of the icon can be configured in the admin panel:
Theme Design > Colors > Iconbox
More complex example
Above heading
Heading Example
Text below heading
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.
Example of another text paragraph inside a block. Icons are an effective way to draw users into the content of your store.
Read more...
Here's another, more complex example with additional headings and nested blocks. To change the background color of the iconbox you can use inline styles. Add the style attribute to the iconbox element and specify the background color.
<div class="feature indent indent-size-xl"> <span class="left ic ic-home ic-lg ib ib-size-xl" style="background-color: #ffb13e;"></span> <h6 class="above-heading">Above heading</h6> <h4>Heading Example</h4> <h6 class="below-heading">Text below heading</h6> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">1</span> <p>Lorem ipsum dolor sit, consectetur adipiscing elit.</p> </div> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">2</span> <p>Lid est laborum et dolorum fuga et harum quidem.</p> </div> <div class="feature feature-icon-hover indent"> <span class="ic ic-char ib">3</span> <p>Seq et perspser iciatis unde omnis iste nautis.</p> </div> <p>Example of another text paragraph inside a block. Icons are an effective way to draw users into the content of your store.</p> <a href="#">Read more...</a> </div>
Centered block
Heading Example
This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.
To align elements of the block to the center, use the centered class.
<div class="feature centered"> <span class="ic ic-lightbulb ic-2x ib ib-size-xl" style="background-color: #bf78dd;"></span> <h4>Heading Example</h4> <p>This is a paragraph of sample text. Using this markup you can quickly build all kinds of blocks.</p> </div>
Font Awesome icons
Font Awesome is a font and icon toolkit based on CSS. It offers a collection of more than 600 vector icons which can be easily customized (the same as other font icons available in the theme).
Basic Font Awesome icons
Use the fa class and the icon's name with an inline HTML element span. Here's an example of the code which will create a flag icon:
<span class="fa fa-flag fa-3x" style="color: #1b926c;"></span>
Use Font Awesome icons with other icon classes
You can use Font Awesome icons together with other icon classes described in this document. Here's an example of an iconbox element (the ib class) with Font Awesome icon inside a block
<div class="feature feature-icon-hover indent indent-size-l"> <span class="ic ic-2x ib ib-size-l left fa fa-flag" style="background-color: #71d1b3;"></span> <h4>Heading Example</h4> <p>This is a short paragraph of sample text inside a block.</p> </div>
|
I was walking through the Django getting started tutorial, and I am getting an error running this code:
I created a database and updated the settings.py file, but I keep getting an error saying my database engine was not set (I have Xed out my name, user, and password; but it's correct in the file):
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'postgresql', 'mysql'
'NAME': 'XXXXX', # Or path to database file if using sqlite3.
'USER': 'XXXXX', # Not used with sqlite3.
'PASSWORD': 'XXXXX', # Not used with sqlite3.
'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
'PORT': '', # Set to empty string for default. Not used with sqlite3.
}
}
This is the error that I am getting:
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_manager(settings)
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/base.py", line 220, in execute
output = self.handle(*args, **options)
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/base.py", line 351, in handle
return self.handle_noargs(**options)
File "/home/tdavis/webapps/food/lib/python2.6/django/core/management/commands/syncdb.py", line 52, in handle_noargs
cursor = connection.cursor()
File "/home/tdavis/webapps/food/lib/python2.6/django/db/backends/dummy/base.py", line 15, in complain
raise ImproperlyConfigured("You haven't set the database ENGINE setting yet.")
django.core.exceptions.ImproperlyConfigured: You haven't set the database ENGINE setting yet.
Any feedback would be much appreciated. Thanks.
The settings.py file was not UNIX formatted. You must have used an editor in another OS to edit the file. I fixed it by running,
answered
johns
That's really weird, I was using TextMate on a mac to edit that file. But you are right, it did end up working, thanks. Although I was not prompted to create a super user. Is there a default one?
answered
tdavis |
```(aka backtick or grave accent) in a single line before and after the block. See: http://commonmark.org/help/
Error while retrieving IBStore historic data with an IB paper account
Hello
I connect to an existing paper account as following:
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=cid)
data = ibstore.getdata(dataname="aapl", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
I get the following error:
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'openOrder' for 'openOrder'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1291, in openOrder
self.broker.push_orderstate(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstate'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'orderStatus' for 'orderStatus'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1301, in orderStatus
self.broker.push_orderstatus(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstatus'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'openOrder' for 'openOrder'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1291, in openOrder
self.broker.push_orderstate(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstate'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'orderStatus' for 'orderStatus'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1301, in orderStatus
self.broker.push_orderstatus(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstatus'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'openOrder' for 'openOrder'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1291, in openOrder
self.broker.push_orderstate(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstate'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'orderStatus' for 'orderStatus'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1301, in orderStatus
self.broker.push_orderstatus(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstatus'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'openOrder' for 'openOrder'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1291, in openOrder
self.broker.push_orderstate(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstate'
14-Aug-19 12:24:24 ERROR Exception in message dispatch. Handler 'orderStatus' for 'orderStatus'
Traceback (most recent call last):
File "C:\Python37-64\lib\site-packages\ib\opt\dispatcher.py", line 44, in __call__
results.append(listener(message))
File "C:\Users\ksander\correlation\backtrader\stores\ibstore.py", line 1301, in orderStatus
self.broker.push_orderstatus(msg)
AttributeError: 'NoneType' object has no attribute 'push_orderstatus'
It seems to be that the server starts pushing order status etc to the client and the BackBroker does not know what to do with them. Quite logical. When I add an IBBroker as following:
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=cid)
cerebro.broker = ibstore.getbroker()
data = ibstore.getdata(dataname="aapl", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
that error is gone but I will soon get another one:
Traceback (most recent call last):
File "C:/Users/ksander/correlation/bt_katse.py", line 66, in <module>
cerebro.broker.setcash(100000.0)
AttributeError: 'IBBroker' object has no attribute 'setcash'
Also logical - the live broker cannot let you set your cash indeed :)
The question is, what is the proper way to download the historical data from the IB if I cannot use an existing paper account with standing orders etc because of that issue?
Does the BackBroker need some exception handling?
Thanks in advance.
It seems to be that it was a mistake to query with a real client ID. That works:
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=666)
data = ibstore.getdata(dataname="aapl", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
# Set our desired cash start
cerebro.broker.setcash(100000.0)
# Print out the starting conditions
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
# Run over everything
cerebro.run()
I was too optimistic. It does not raise an exception but it neither retrieves any data. It would be surprising if it did with a fictional client id.
Please help.
clientIdidentifiers are not fictional. They simply let you identify the client issuing operations againstTWS. When you don't specify one, you see everything which is what you obviously suffered at the beginning. (although you seem to use theGateway)
It has nothing to do with data retrieval. Data retrieval is governed by data permissions which are the same for the paper trading account and the real trading account. You have to explicitly share the data permissions and only use them from one (this policy may have been subject to change overtime, check with
IB)
As the documentation already points out,
aapl(which should be written asAAPL) is a bad example for data download, given the fact it is listed in multiple exchanges and currencies, which means you have to be specific about what you actually want (callIBand let them know you simply wantaaplif you don't like that policy)
Please read: Docs - Live Trading - Interactive Brokers and check the actual data permissions your paper trading account.
In the beginning, I used my real account ID which I would not like to disclose here. It is a 7-digit number: 105xxxx. That resulted in the Exception from the BackBroker. I am sorry that I failed to indicate it in my first post.
Later, I tried a fictional client id such as 666 to see what happens.
It is not an issue of permissions since I do get the historic data with a test client that I wrote from scratch with the same clientId, including AAPL. Yes, I am using the GW.
The exception is raised regardless I use "aapl" or "AAPL". I suspect that it happens before the historical data download.
Sorry but you are confused. Your account number has
NOTHINGto do with aclientId. TheclientIdis an identifier forYOU. If you have multiple clients connecting toTWS/IBGateway, you can distinguish the clients using theclientId.
Yes, I know I am confused. Nothing to be sorry about. :) Thanks for the clarification about the clientId.
However, I am getting even more confused. The outcomes of using my account ID for the clientId and an arbitrary number such as 666 differ. How could THAT be???
This results in the exception described in my first post:
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=105****) # my account number, digits replaced with asterisks
data = ibstore.getdata(dataname="AAPL", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
This does not raise an exception but does not retrieve any data either:
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=6666666)
data = ibstore.getdata(dataname="AAPL", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
Check what the setting for
master Client Idis ... https://www.interactivebrokers.com/en/software/tws/usersguidebook/configuretws/apisettings.htm
Thanks, I had completely forgotten about this setting.
The field was empty. However, I filled it with 6666666 and now BOTH of the code snippets in my last post raise the same Exception while a similar request with clientId = 7777777 does not. That's a bit odd but goes beyond the scope of the Backtrader discussion.
However, I still cannot retrieve the historical data and I cannot understand what I am doing wrong. Here is the whole code (based on the First Strategy tutorial):
from _datetime import datetime
import backtrader as bt
# Create a Stratey
class TestStrategy(bt.Strategy):
def log(self, txt, dt=None):
''' Logging function for this strategy'''
dt = dt or self.datas[0].datetime.date(0)
print('%s, %s' % (dt.isoformat(), txt))
def __init__(self):
# Keep a reference to the "close" line in the data[0] dataseries
self.dataclose = self.datas[0].close
def next(self):
# Simply log the closing price of the series from the reference
self.log('Close, %.2f' % self.dataclose[0])
if __name__ == '__main__':
# Create a cerebro entity
cerebro = bt.Cerebro()
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=7777777)
#ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=6666666)
data = ibstore.getdata(dataname="AAPL", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1),
timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
cerebro.addstrategy(TestStrategy)
cerebro.broker.setcash(100000.0)
# Print out the starting conditions
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
# Run over everything
cerebro.run()
# Print out the final result
print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
The output is:
Starting Portfolio Value: 100000.00 Server Version: 76 TWS Time at connection:20190817 12:16:29 Final Portfolio Value: 100000.00
The breakpoint in the next() is never reached, no Exceptions are raised.
To be sure about the permissions, I just downloaded the 5 minute AAPL bars for the last 2 years with my own client.
Many thanks for your time.
The breakpoint in the next() is never reached, no Exceptions are raised.
You are ignoring the data and store notifications. You are using
AAPL, which, as stated above, is a non-deterministic ticker for Interactive Brokers. This is in the documentation. Your other client must already fill information for a specific target.
Again.
As the documentation already points out, aapl (which should be written as AAPL) is a bad example for data download, given the fact it is listed in multiple exchanges and currencies, which means you have to be specific about what you actually want (call IB and let them know you simply want aapl if you don't like that policy)
Please read: Docs - Live Trading - Interactive Brokers
The
AAPLcase is clearly explained there.
The sample code for
ibdoes also show how to usedataandstorenotifications to understand that the platform is telling you it cannot fulfill your request.
That's right. This works:
data = ibstore.getdata(dataname="AAPL-STK-SMART-USD", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1), timeframe=bt.TimeFrame.Minutes, compression=5)
I am sorry for having been a nuisance. BT is an impressive project and the forums show that you provide prompt and straight-to-the-point support even to stupid questions. Keep up the good work!
bazar gurulast edited by
@kriku I am trying to replicate the same scenario for Indian stock . I have a paper account with IB.
from _datetime import datetime
import backtrader as bt
# Create a Stratey
class TestStrategy(bt.Strategy):
def log(self, txt, dt=None):
''' Logging function for this strategy'''
dt = dt or self.datas[0].datetime.date(0)
print('%s, %s' % (dt.isoformat(), txt))
def __init__(self):
# Keep a reference to the "close" line in the data[0] dataseries
self.dataclose = self.datas[0].close
def next(self):
# Simply log the closing price of the series from the reference
self.log('Close, %.2f' % self.dataclose[0])
if __name__ == '__main__':
# Create a cerebro entity
cerebro = bt.Cerebro()
ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=7777777)
#ibstore = bt.stores.IBStore(host='127.0.0.1', port=4002, clientId=6666666)
data = ibstore.getdata(dataname="RELIANCE-STK-SMART-IND", historical=True, fromdate=datetime(2019, 1, 1), todate=datetime(2019, 8, 1), timeframe=bt.TimeFrame.Minutes, compression=5)
cerebro.adddata(data)
cerebro.addstrategy(TestStrategy)
cerebro.broker.setcash(100000.0)
# Print out the starting conditions
print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
# Run over everything
cerebro.run()
# Print out the final result
print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
But this is not giving me any data. Getting below output :
Starting Portfolio Value: 100000.00 Final Portfolio Value: 100000.00
I have the permission enabled in IB to share the live account data with paper account. Am i missing something ? |
That's probably the approach I'll take. I was just wondering if there was some inbuilt functionality, but it's not much work to do, so thanks. Have a nice day!
[SOLVED] Custom language in admin panel
Hello, recently I finished the translation of the admin control panel files at transifex to my language (pt-BR). Is it possible to currently put that translation to work on my nodebb so I can fine tune/improve it?
The javascript programmers (rofl) think people doesn't deserve a reply. Open $ource. (really hard to find github link in nodebb.org)
This is an active forum. Posts can get buried. It also takes time to reply to everyone.
And yes the Github link should be clearly on the site but Google is your friend.
I'm not very acquainted with Transifex, and I don't know if it's possible to test the translations out ahead of time. But if it is possible to download the translations, then you can copy them into the correct directory in
/public/src/languageand then start NodeBB with the new translations.
Thanks for your translations.
Well, I have no time for discuting bs with js;
The admin language files are inside [nodebb]/public/language/YOURFORUMLANGUAGE/admin/
I have made a python3 script to make an organized
admin/directory from the files downloaded from transifex. (You should download all files one by one, should be 30 or 40, takes 7~10 minutes)
Make a new directory
Dowload all translated .json files to that directory
They have names like for_use_nodebb_admin-advanced-events_pt_BR.json.
Save this script in a file called freak.python3in the directory you created.
#!/usr/bin/env python3
import os
import glob
# use os.system("") to do it lil boy
import subprocess
# Files downloaded from transifex has this string in filename.
# This should be removed from them; change with yours.
language_str = "_pt_BR"
json_files = glob.glob('*.json', recursive=False)
dirs_created = list()
for file in json_files:
ripped = file.split('-', 2) # 2 here is a hack to avoid splitting ip-blacklist.json and web-crawler.json
json_filename = ripped[-1].replace(language_str,'')
print(ripped)
if len(ripped) == 3:
directory = "admin/{0}/".format(ripped[1])
elif len(ripped) == 2:
directory = "admin/"
else:
directory = "ERROR/" # if this directory appears something went wrong
if not directory in dirs_created:
dirs_created.append(directory)
print(directory)
os.system("mkdir -p {0}".format(directory))
print(json_filename)
os.system("cp {0} {1}{2}".format(file, directory, json_filename))
CHANGE THE LINE.language_str = "_pt_BR"WITH YOUR LANGUAGE KEEPING THE _ before.
Make it executable: chmod +x freak.python3
Execute it: ./freak.python3
This will create an admin/directory inside the directory, tar it and send to server:tar zcvf admin.tgz admin/
scp admin.tgz SSHUSER@SSHSERVER:or whetever you use.
@priapo hey that's pretty neat. Thanks for sharing.
Transifex has CLI client - so you don't need to manually download all translation files.
Yep, we use the tx client to pull translations from Transifex. |
Issue
The virt-who fails to report guests and hosts mapping with error as below,
# less /var/log/rhsm/rhsm.log
[ERROR] @virt-who.py:206 - Error in communication with subscription manager, trying to recover:
Traceback (most recent call last):
File "/usr/share/virt-who/virt-who.py", line 190, in _send
result = self.subscriptionManager.hypervisorCheckIn(self.options.esx_owner, self.options.esx_env, virtualGuests)
File "/usr/share/virt-who/subscriptionmanager.py", line 92, in hypervisorCheckIn
return self.connection.hypervisorCheckIn(owner, env, mapping)
File "/usr/lib64/python2.4/site-packages/rhsm/connection.py", line 678, in hypervisorCheckIn
return self.conn.request_post(url, host_guest_mapping)
File "/usr/lib64/python2.4/site-packages/rhsm/connection.py", line 484, in request_post
return self._request("POST", method, params)
File "/usr/lib64/python2.4/site-packages/rhsm/connection.py", line 443, in _request
self.validateResponse(result)
File "/usr/lib64/python2.4/site-packages/rhsm/connection.py", line 468, in validateResponse
raise RestlibException(response['status'], error_msg)
RestlibException: undefined local variable or method `hypervisor' for #<Class:0x000000053cbb58>
2013-09-04 05:49:11,252 [DEBUG] @subscriptionmanager.py:89 - Sending update in hosts-to-guests mapping: {806dd54d-c637-e011-8bb1-ef6d3f96e567: [42341b36-0580-a5a7-e479-4ef57441201c, 4234417e-61f3-8acf-0ad5-384b1a56a7ff]}
Environment
Red Hat Enterprise Linux(RHEL)
Subscription Asset Manager(SAM)
VMware ESX
Subscriber exclusive content
A Red Hat subscription provides unlimited access to our knowledgebase of over 48,000 articles and solutions. |
Python vs. Java: Comparing Two Popular Programming Languages
In this article, we'll be comparing the features of two server-side programming languages; Python and Java. Let's begin with some design differences in both the languages.
Fundamental differences in the design and implementation of Python and Java
History
Having an idea of the past can provide us the context to understand why things were built the way they are now. Python was created to bridge the gap between C and the shell. It was intended to be a higher-level interpreted language that enables clean, concise, and readable code.
Java was created to be a compiled, platform-independent Object-Oriented programming language. The intention was to achieve code portability (write once run everywhere) with little or no programmer effort. One of the early applications of Java was incorporation into browsers like Netscape, and it soon became popular.
You'll find most of the legacy systems and enterprise-level web applications programmed in Java than in any other language. And you'd often find Python being used a "glue" to combine different components in these systems.
Design
There are a lot of great resources on the internet that explains the design differences in-depth, so we won't dive into those, but instead, I'll mention a few "simplified" takeaways,
The most fundamental design difference is that Python is an interpreted language, and Java is a compiled language. This difference dictates a lot of the features and limitations of both languages.
Any programming language must translate the code written by the programmer into a set of instructions, or machine code, that can be executed on the machine. In interpreted languages, this process happens on-the-fly while executing the program, whereas compiled languages do some pre-processing before executing the program.
Java compiler converts the code into a platform-independent bytecode, which can then be loaded and executed on any instance of Java Virtual Machine (JVM). Similarly, the Python code is processed into Python bytecode and runs in Python Virtual Machine.
However, the difference is that while Python compiles to bytecode at runtime, Java compiles in advance. Java runtime also consists of a Just-in-time (JIT) compiler, which improves efficiency by being able to compile the bytecode into machine code in "almost real-time."
Consequences of the design and history
On semantics
Python is a dynamically-typed language, meaning it infers variable types on its own. Java, on the other hand, is a statically-typed language, which means variable types should be declared explicitly.
In Python, you can worry less about variable types and focus more on the logic. So if you write Pythonic code (Example), you can do more in less lines of code as compared to Java. And on top of that, the indentation rules make the code inherently more readable.
Java is strict in the sense that the programmers need to write verbose code. Many mistakes can be caught during the compile time in Java. You have more flexibility and control in terms of adhering to various design patterns in Java as compared to in Python.
On performance
Many luxuries in the CPython implementation of Python (the most widely used Python implementation) come at the cost of,
A slower run time because of more work needed to translate Python code to machine-level code. Java does many things like type checking, locating memory addresses for different identifiers. During pre-processing (generation of bytecode), and static typing provides opportunities for optimization during the run time as well.
More chances of getting errors (related to type checking and conversions) during run time.
A higher memory footprint of objects in Python.
Concurrency in Python
CPython implements a Global Interpreter Lock to ensure thread-safety, which means,
Only one thread can execute at a time on a CPU, even if you have a multi-core processor.
In essence, you can create multiple threads, but they run turn-by-turn instead of running in parallel (concurrency without parallelism). Parallel I/O is still possible (and happens) among multiple threads.
To achieve parallelism with processing, you need the program to spawn separate processes and coordinate with them. These processes can be instances of interpreters executing Python code or low-level programs like C-extensions.
Python provides some abstraction for performing multiprocessing through the built-in multiprocessing module. For parallelization of I/O related tasks, Python included asyncio module which received significant usability and performance improvements in the recent Python 3.7.x version.
Concurrency in Java
Java Virtual Machine (JVM) is capable of executing multiple threads in parallel on multiple CPU cores. The programmers have to deal with the complexities of dividing their tasks into threads and synchronization between them. Java provides Thread class and Java .util.concurrent package containing some abstractions for multi-threading. The fact that most of the popular distributed computation frameworks (like Spark and Hadoop) are primarily written in Java is evidence of its suitability for concurrent execution.
Note: We discussed the most popular implementation of Python (CPython) in this section. There are other implementations as well, which make some other trade-offs for the sake of performance and to support parallel execution (take a look at the pypy project and Stackless Python, which supports JIT compilation and concurrency).
Comparing simple iterative and recursive programs in Python and Java
We'll take two well known mathematical problem statements,
Compute n'th value in the Fibonacci sequence.
Compute factorial of n.
Following are the simple implementations for the same, you can observe some of the differences that we discussed in above section in the code and the results.
# Python version 3.8.0 (CPython implementation)
def fib(n):
# Iterative fibonacci
a, b = 0, 1
for i in range(0, n):
a, b = b, a + b
return a
def fib_r(n):
# Recursive fibonacci
if n < 2: return n
return fib_r(n-1) + fib_r(n-2)
def fac(n):
# Iterative factorial
x = 1
for i in range(2, n + 1):
x = x * i
def fac_r(n):
# Recursive factorial
if n >= 1:
return n * fac_r(n - 1)
return 1
# Printing out the run times, the value of n is decided based on execution times and maximum stack depth
print(timeit.timeit(lambda: fib(60), number=1) * 1000)
print(timeit.timeit(lambda: fib_r(40), number=1))
print(timeit.timeit(lambda: fac_r(25), number=1) * 1000)
print(timeit.timeit(lambda: fac(25), number=1) * 1000)
/*
Java version 11.0.3
Please excuse me for using `snake_case` in the program.
*/
public class SimpleMethodsPrimitive {
public static void main(String args[]) {
long start_time = System.nanoTime();
fib(60);
// fib_r(40);
// fac_r(25);
// fac(25);
long stop_time = System.nanoTime();
// Printing out run time in nanoseconds
System.out.println(stop_time - start_time);
}
private static long fib(int n) {
// Iterative fibonacci
long a = 0, b = 1;
for (int i = 0; i < n; i++) {
a = b;
b = a + b;
}
return a;
}
private static int fib_r(int n) {
// Recursive fibonacci
return n < 2 ? n: fib_r(n-1) + fib_r(n-2);
}
private static long fac(int n) {
// Iterative factorial
long x = 1;
for (int i = 2; i < n + 1; i++) {
x = x * i;
}
return x;
}
private static long fac_r(int n) {
// Recursive factorial
return n < 1 ? 1: n * fac_r(n -1 );
}
}
Beyond design - The Development Ecosystem, Libraries and Frameworks
Developer productivity is an essential factor in deciding which language to choose from. Let's take a look at the ecosystem and libraries that support developer productivity.
Dependency management and Code distribution
Java code is packaged and distributed in the form of .jar files, whereas in Python, it is distributed in the form of .whl files. Package management in Java is relatively stable but more complex to learn.
In Python, pip is pretty much what you'll need to know about in most of the use-cases to manage dependencies. PyPI (Python's package index) is the place where the packages are hosted so that anyone can use them.
PyPI's equivalent in Java is MVNRepository, and the dependencies are specified in the configuration files of build-automation tools like Apache Maven and Gradle. Python now has built-in support from virtual environments (isolated dependency environment specific to projects); a similar thing can be achieved in Java using classpaths.
Libraries and Frameworks supporting typical web development
Java has a strong JDBC (Java DataBase Connectivity) API for and connecting to databases, which is also the reason why Java language has been the popular choice among the enterprise systems. Python's database access layers are slightly more challenging to deal with, as compared to Java. Both languages have ORM capabilities.
It is tough to write the entire backend from scratch, so both the languages have frameworks that provide an abstraction to set-up a reliable and secure backend without reinventing the wheel. Spring is by far the most popular web frameworks in Java, whereas Django and Flask are the two popular web frameworks in Python.
In terms of performance, Java web frameworks are faster, but the Python frameworks are also not far behind (see the benchmarks here). Spring has a LOT of production-friendly dependencies to deal with caching, authentication, databases, messaging, and whatnot, which means the developers can focus just on business logic. The downside of Spring is the big learning curve it has (because of things like dependency injection, verbose configurations, and more), with some developers even describing their early learning experiences as "black magic." It is also much more resource-intensive as compared to Django or Flask. The resource overhead of Spring can sometimes seem unjustified for small-to-medium size web applications.
Debugging and Testing
Both the languages are easy to debug, but I've personally found stack-traces and exceptions in Python to be more helpful. Another thing that sticks out for me is the build-time is usually much faster in Python as compared to Java (since Python is an interpreted language), which is excellent when you are doing hit-and-trial style debugging. This might be true because Java codebases are typically more substantial and more complex.
If you use a modern IDE or static code analysis tools, these can prevent many errors beforehand in both the languages, and be able to add breakpoints to inspect the variables during runtime.
Java has various popular libraries at various levels of abstraction (like Junit, TestNG, PowerMock) to unit-test your code. Python has a built-in unit test library whose design was inspired by Java's JUnit framework. Other higher-level frameworks for unit-testing in Python include pytest and nose. Unit testing in Python requires slightly more effort because of its dynamically typed nature.
When it comes to Behavior Driven Development, the most popular BDD framework in Python is behave, followed by pytest plugins like pytest-bdd. In Java, popular choices are Cucumber and Spock. Selenium, the most popular web-automation testing framework, is primarily written in Java. It is easier to find solutions to your issues when you're using their Java API (Selenium has a Python API, too) to do things like end-to-end automation testing.
Documentation also helps in debugging and testing well. Python has a built-in doctest module that mixes well with the interactive nature of the language, as it helps in writing interactive statements in the documentation that serve the purpose of explaining as well as testing (this way you've fewer chances of outdated documentation). Similar functionality is very complex to replicate in Java.
Community
Python's community has been proliferating due to its suitability in domains beyond web applications (like Data Analytics, Image Processing, Machine Learning, and more). According to Github's Octoverse, Python was the second most used language on Github, followed by Java. In Stackoverflow's 2019 developer survey, Python was crowned the fastest-growing programming language edging out Java this year.
Who's using Java and Python in web development?
Below is a list of well-known companies that use Java in web development:
And here is a list of companies that use Python in web development:
Conclusion
In this article, we discussed the differences between Java and Python. We can safely say that both of these languages suitable server-side web-development. If you're about to build a very "enterprisey" web application where performance and security are critical, then Java still has the upper hand despite Python's fast-growing ecosystem. On the other hand, if you have experienced Python developers and care more about developer productivity, or have to deal with things like extensive number crunching, image processing, analytics, then Python has the edge over Java. |
와 같은 목록에있는 요소의 합 또는 곱이 필요한 경우
>>> foo = [10, 5, 3, 4]
sum를 사용할 수 있습니다 또는 prod numpy의 함수
>>> import numpy as np
>>> np.sum(foo)
22
>>> np.prod(foo)
600
마찬가지로 누적 합계 나 제품이 필요할 때 np.cumsum를 사용할 수 있습니다 또는 np.cumprod
>>> np.cumsum(foo)
array([10, 15, 18, 22])
>>> np.cumprod(foo)
array([ 10, 50, 150, 600])
임의 reduce의 누적 결과를 얻을 수있는 방법이 있습니까 작업?
예를 들어 다음과 같은 기능이 있다면
def my_fn(a, b):
return a + b**2
functools.reduce를 사용할 수 있습니다 얻기 위해
>>> from functools import reduce
>>> reduce(my_fn, foo)
60
내가 찾는 것은 다음을 제공하는 기능입니다
>>> cumreduce(my_fn, foo)
[10, 35, 44, 60]
즉, 결과의 각 요소는
reduce(my_fn, foo[:i])와 같습니다..<시간>
물론, 순진한 방법으로이 작업을 수행 할 수 있습니다
>>> def cumreduce(fn, seq):
... return [reduce(fn, seq[:i]) for i in range(1, len(seq)+1)]
>>> cumreduce(my_fn, foo)
[10, 35, 44, 60]
이상적으로는 동일하거나 유사한 기능을 갖춘 내장 함수를 찾고 있습니다.
답변 # 1
파이썬에서 찾고있는 것은
itertools.accumulate입니다. :
import itertools
[*itertools.accumulate(foo,my_fn)]
# [10, 35, 44, 60]
Numpy ufuncs에는 종종 축적 방법이 있습니다. 예 :
np.bitwise_xor.accumulate(foo)
array([10, 15, 12, 8])
np.add.accumulate(foo)
array([10, 15, 18, 22])
# cf. cumsum:
np.cumsum(foo)
array([10, 15, 18, 22]
관련 자료
excel - 직원이 일한 날에 대한 부울 항목을 만들 수있는 VBA 함수가 있습니까?
VBA에서이 기능을 수행하는 더 현명한 방법이 있습니까?
python - 파일의 행을 읽은 다음 특정 수의 행이 될 때까지 추가
python - 정수 X를 0X에 매핑하는 함수가 있습니까?
포인터가 가리키는 할당 된 메모리 크기를 반환하는 함수가 C에없는 이유는 무엇입니까?
Python의 여러 코드 줄에 대한 함수 래퍼
table() function in r - r의 table () 함수 - 예를 들어 dplyr에 더 좋은 방법이 있습니까?
sql - unpivot과 함께 집계 함수
TypeScript에 함수가 변수를 확실히 정의 할 것이라고 선언하는 방법이 있습니까?
flutter - Futter에서 x 초 후 showDialog에서 응답이없는 경우 함수를 호출하는 가장 좋은 방법은 무엇입니까?
javascript - 이것은 화살표 함수 선언입니까? 그런 것이 있습니까?
postgresql - 타임 스탬프에 대한 최대 기능이 있습니까 (시간대 포함 또는 제외)?
python - C #에 numpytile () 함수에 해당하는 것이 있습니까?
functional programming - Curry의 N 항 함수와 Prolog의 N + 1 항 관계간에 차이점이 있습니까?
import - len 함수가 잘못된 단어 길이를 표시하는 이유가 있습니까 (공백 없음 등)?
javascript - $$eval 함수에 전역 변수를 전달하는 방법이 있습니까?
javascript - 누군가 메시지에 반응했는지 확인하는 기능이 있습니까? (discordjs)
reactjs - 후크 useState에 반응하도록 함수를 설정하는 더 좋은 방법이 있습니까?
dataframe - R에 bind_rows () 및 bind_cols () 함수가 있습니까?
MongoDb에서 문자열 중간에 공백을 제거하는 기능이 있습니까?
python - Pandas는 dtypes에서 case가 의미하는 바가 있습니까?
python - 어떤 numpy 인덱스가 복사이고 어느 것이 뷰입니까?
3D numpy 배열을 2 그룹으로 나누기 python
python - matplotlib 등고선 플롯 채우기
python - 비용 함수가 상승하는 신경망의 버그
python runtimeerror - numpy 설치 오류
python - npangle에서 반환 된 부정확 한 위상
arrays - python/numpy에서 슬라이싱 할 때 '둘러싸 기'
python - nparray ([1, "a"])가 21 자의 유니 코드 문자열을 사용하는 이유는 무엇입니까?
파이썬 목록에서 2D 배열을 만드는 방법은 무엇입니까? |
グラフ一括表示の際のタイトルの表示
前回、subplots、subplotでのX軸、Y軸の表示範囲を指定する方法を解説しました。
今回はsubplotsでグラフを一括表示した際にタイトルを表示する方法を解説していきます。
そして次回はもう一つのグラフ一括表示の方法 subplotでのタイトルの表示方法を解説します。
ということでまずは基本となるグラフを紹介します。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.show()
実行結果
データはX値が一つ、Y値がy1からy4までの4つで、値自体は前回と同じです。
ただグラフを表示する数は2つにして、一つ目のグラフにy1、y2を、二つ目のグラフにy3、y4をプロットしています。
このグラフを基本として、タイトルを表示していきましょう。
タイトルの表示:失敗編
まずはタイトルを表示してみましょう。
前に一つのグラフを表示した際のタイトルの表示は「plt.title(“タイトル名”)」でした。
これで表示されるか試してみましょう。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.title("Title")
plt.show()
実行結果
二つのグラフの間に「Title」が表示されました。
とりあえずこれじゃないので、正解を解説していきましょう。
グラフ全体のタイトルを表示
複数のグラフを一括表示するということは、タイトルとしては全体のタイトルとそれぞれのグラフのタイトルの2種類があります。
まずはグラフ全体のタイトルを付けてみましょう。
その場合には「suptitle(“タイトル”)」 を用います。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title")
plt.show()
実行結果
上の方に「Title」と表示されました。
ちなみにフォントサイズを変更したい場合は「fontsize=X」を追加することで可能です。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title", fontsize=20)
plt.show()
実行結果
ちなみに前、matplotlibの解説した際はフォントサイズの変更に「{“fontsize”: X}」を追加しました。
この形でも認識されるのか試してみましょう。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title", {"fontsize": 20})
plt.show()
実行結果
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-6d82abcbda3e> in <module>
15 axes[1].plot(x, y4)
16
---> 17 plt.suptitle("Title", {"fontsize": 20})
18
19 plt.show()
TypeError: suptitle() takes 1 positional argument but 2 were given
ダメでした。
最近は自分でもfontsize=Xという書き方をしているので、{“fontsize”:X}という書き方はあまり一般的ではないのかもしれません。
また時間ができたら調査してみたいと思います。
それぞれのグラフにタイトルを表示
それぞれのグラフにタイトルを表示するには、「set_title(“タイトル”)」というコマンドを用います。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title", fontsize=20)
axes[0].set_title("Title1")
axes[1].set_title("Title2")
plt.show()
実行結果
表示されましたが、他の箇所とかぶってしまっていて、みづらい状況になってしまっています。
この場合、「subplots_adjust」というコマンドを使って調整します。
subplotsを使う際にはこのコマンドはよく使うと思うので、また機会を作って細かく解説していきます。
ということで今回は紹介だけ。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title", fontsize=20)
axes[0].set_title("Title1")
axes[1].set_title("Title2")
plt.subplots_adjust(top=0.85, hspace=0.5)
plt.show()
実行結果
かぶらない様に表示できました。
それぞれのグラフのタイトルのフォントサイズを変えるには「fontsize=X」で変更することができます。
from matplotlib import pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y1 = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
y2 = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
y3 = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
y4 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
fig = plt.figure()
axes= fig.subplots(2)
axes[0].plot(x, y1)
axes[0].plot(x, y2)
axes[1].plot(x, y3)
axes[1].plot(x, y4)
plt.suptitle("Title", fontsize=20)
axes[0].set_title("Title1", fontsize=20)
axes[1].set_title("Title2", fontsize=20)
plt.subplots_adjust(top=0.85, hspace=0.5)
plt.show()
実行結果
それぞれのグラフのタイトルのフォントサイズを変更することができました。
今回はsubplotsを使った際のタイトルの表示を解説しました。
最初にお話しした通り、次回はsubplotを使った際のタイトルの表示を解説していきたいと思います。
ということで今回はこんな感じで。 |
I am using package listings to import my Python source code into my LaTeX document. I use the command \lstinputlistings. I have a Python source like
class MyClass(Yourclass):
def __init__(self, my, yours):
bla bla bla...
What should I write in my \lstset command in order to highlight words MyClass, __init__ etc.?I wouldn't want to write any word I want to be highlighted. I tried using moredelims=[s][\color{teal}]{class}{(} inside lstset but it doesn't work. |
How much slower than C are modern compiled languages? With the plethora of programming languages at the world's disposal, is performance the real decider? This experiment benchmarks a recursion algorithm in C, Rust, Nim and Go to try and get an hint of an answer to this popular question.
Prime number generator: Sieve of Eratosthenes
The algorithm chosen to conduct this experiment is the recursive version of a prime number generator called the sieve of Eratosthenes. The particular test used in the benchmark consists in generating the first 50k prime numbers and print in the console the last 10 found. To make the comparison between the different languages as fair as possible, we start with a C implementation and transpile it as precisely as possible to the other languages. This way we are really comparing how each compiler deals with very similar code instructions. Here is the implementation:
#include <stdio.h>
#define NUM_PRIMES 50000
static unsigned int size = 0;
static unsigned int n = 2;
void sieve(unsigned int * primes) {
for(unsigned int i = 0; i < size ; i++) {
if(n % primes[i] == 0){
n++;
sieve(primes);
return;
}
}
primes[size] = n;
size++;
}
int main() {
unsigned int primes[NUM_PRIMES];
while(size < NUM_PRIMES) {
sieve(primes);
}
for(unsigned int k = NUM_PRIMES-10; k < NUM_PRIMES; k++) {
printf("%u\n", primes[k]);
}
}
It was very important to keep some key elements the same across languages. First, the variable types used. In C, there is a significant gain in perfomance going from intto unsigned int for example. Second, the use of a fixed size array to store all the prime numbers to avoid the overhead of dynamic allocation. And finally, maximising reference passing between function calls for maximal speed. All the implementations in other languages follow this C template. First up, our usual language of choice: Rust.
Rust version
The Rust implementation had to be different on a couple of points to fit the language specifications. To allow for recursive calls pure functions have to be used hence there can not be any static variable access. The sieve function has to take extra parameters for this reason. Also the base type for array indices is usize which is not the unsigned int (or u32in Rust) we have used in the C code. So extra casting operations are done from u32 to usize to keep the compiler happy. Here is the code:
fn main() {
const NUM_PRIMES : usize = 50000;
let mut i : u32 = 2;
let mut size : u32 = 0;
let mut primes : [u32; NUM_PRIMES] = [0; NUM_PRIMES];
fn sieve(v: &mut [u32; NUM_PRIMES], n: &mut u32, s: &mut u32) {
for i in 0..*s {
if *n % v[i as usize] == 0 {
*n += 1;
sieve(v, n, s);
return;
}
}
v[*s as usize] = *n;
*s += 1;
}
while size < NUM_PRIMES as u32 {
sieve(&mut primes, &mut i, &mut size);
}
for k in NUM_PRIMES-10..NUM_PRIMES {
println!("{}", primes[k]);
}
}
Despite these few points we can recognise the logic of the C code quite easily. Will these extra casting operations performed be costly to performance? We will get soon back to this point. Next, the Nim implementation.
Nim version
Even though garbage collection seems to be its default use case, the Nim compiler is very flexible as the language supports reference passing and even pointers for the bravest of coders. Our implementation will only use reference passing for the prime number array which keeps the code relatively clean and very similar to the C implementation.
const NUM_PRIMES : uint32 = 50000
var size : uint32 = 0
var n : uint32 = 2
proc sieve(primes: ref seq[uint32]) =
var i : uint32 = 0
while i < size:
if n mod primes[][i] == 0:
inc(n)
sieve(primes)
return
inc(i)
primes[][size] = n
inc(size)
when isMainModule:
var primes : ref seq[uint32]
new(primes)
primes[] = newSeq[uint32](NUM_PRIMES)
while(size < NUM_PRIMES):
sieve(primes)
for k in countup(NUM_PRIMES-10,NUM_PRIMES-1):
echo primes[][k]
While we are dabbling with the modern garbage-collected but flexible languages, we would need to add Go as the last language in our comparison study.
Go version
The translation from C to Go is fairly straightfoward given how closely related to C the Go syntax was chosen to be. For this reason, there is nothing to add as a general commentary to this implementation other than the code.
package main
import "fmt"
const NUM_PRIMES uint32 = 50000
func sieve(v *[NUM_PRIMES]uint32, n *uint32, s *uint32) {
var i uint32 = 0
for i < *s {
if *n % v[i] == 0 {
*n++
sieve(v, n ,s)
return
}
i++
}
v[*s] = *n
*s++
}
func main() {
var size uint32 = 0
var i uint32 = 2
var primes [NUM_PRIMES]uint32
for size < NUM_PRIMES {
sieve(&primes, &i, &size)
}
for k := NUM_PRIMES-10; k < NUM_PRIMES; k++ {
fmt.Printf("%d\n", primes[k])
}
}
Now that we have all our implementations ready, let's compile and compare the results.
Benchmarking results
All testing has been performed on a MacPro OSX High Sierra Desktop running on two 2.4 GHz 6-Core Intel Xeon and 12 GB of RAM. Not the purest testing environment but this exercise is meant to be a rough guide and not a testing standard. All programs were compiled as release builds without any other optimisation options:
Rust Cargo: cargo build --release sieve
Nim: nim c -d:release sieve.nim
Go: go build -buildmode=exe sieve.go
The time terminal application was used on the executable files generated. Here are the results for 10 consecutive runs of the executable in each language:
Based on this set of measurements, Nim is the clear winner as it is matching C's performance and stability. Rust and Go are lagging behind but staying pretty close. The sample size is clearly not big enough to get a good statistical certainty on the ranking but, given how close all these results are from each other, we do not need to dig further to come to a reasonable conclusion...
Conclusion
These modern compiled languages can be optimised to run at very comparable speed to the equivalent C implementation. So if the open source communities behind these language write libraries with performance in mind, execution speeds similar to the old established low level languages can be expected. But if performance is not the criterion to look into to pick one language over another, then what should it be? Well, it is probably more relevant to look at how appealing the syntax looks, the package manager, the community, interoperability, safety and so on. There is also a message for teams designing languages in this: Nailing performance alone is not enough to popularise adoption, you need to bring something new to the table.
If you like this post, don't forget to follow me on Twitter and get notified of the next publication. |
mic, speakerを使って,
オウムがえしするraspi
ドアを開けたら『帰ります』と中から返事が来るようにしたraspi
を作成.
refs:
主な流れはこちらを参考にした
http://qiita.com/kinpira/items/75513eaab6eed19da9a3
途中で音が出なくなったりした時に参考にした
http://www.yam-web.net/raspberry-pi/music.html
タクトスイッチを組み込む時に参考にした
http://robocad.blog.jp/archives/678444.html
準備
pyaudioを入れる必要があるので
sudo apt-get install python-pyaudio
でinstall.
あとはrefsを参考に, マイクとスピーカーを使えるようにする.
options snd slots=snd_usb_audio,snd_bcm2835
options snd_usb_audio index=0
options snd_bcm2835 index=1
してrebootしたらマイクが動くようになった.
ただし, 音が出なくなっていたことに気づいたので色々いじった.
結局,
amixer cset numid=3 1
を実行すると音が出るようになった.
あとは, pyaudioを使って, 録音と発音を一気にやればおk.
面倒になってきたのでやっつけなことがよく分かるcodeは下記.
codes
switch input
import RPi.GPIO as GPIO
from time import sleep
GPIO.setmode(GPIO.BCM)
#GPIO.setup(25, GPIO.OUT)
GPIO.setup(14, GPIO.IN)
try:
while True:
if GPIO.input(14) == GPIO.HIGH:
print(1)
else:
print(0)
sleep(0.01)
except KeyboardInterrupt:
pass
GPIO.cleanup()
スイッチを押すとオウムがえしするraspi
# -*- coding: utf-8 -*-
import RPi.GPIO as GPIO
from time import sleep
import pyaudio
import wave
import threading
class AudioPlayer(object):
""" A Class For Playing Audio """
def __init__(self, audio_file):
self.audio_file = audio_file
self.playing = threading.Event() # 再生中フラグ
def run(self):
""" Play audio in a sub-thread """
audio = pyaudio.PyAudio()
input = wave.open(self.audio_file, "rb")
output = audio.open(format=audio.get_format_from_width(input.getsampwidth()),
channels=input.getnchannels(),
rate=input.getframerate(),
output=True)
while self.playing.is_set():
data = input.readframes(CHUNK)
if len(data) > 0:
# play audio
output.write(data)
else:
# end playing audio
self.playing.clear()
# stop and close the output stream
output.stop_stream()
output.close()
# close the input file
input.close()
# close the PyAudio
audio.terminate()
def play(self):
""" Play audio. """
if not self.playing.is_set():
self.playing.set()
self.thread = threading.Thread(target=self.run)
self.thread.start()
def wait(self):
if self.playing.is_set():
self.thread.join()
def stop(self):
""" Stop playing audio and wait until the sub-thread terminates. """
if self.playing.is_set():
self.playing.clear()
self.thread.join()
def rec_wav(CHUNK):
FORMAT = pyaudio.paInt16
CHANNELS = 1 #モノラル
RATE = 32000 #サンプルレート
RECORD_SECONDS = 5 #録音する時間の長さ
WAVE_OUTPUT_FILENAME = "file.wav"
audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
rate=RATE, input=True,
input_device_index=0, #デバイスのインデックス番号
frames_per_buffer=CHUNK)
print ("recording...")
frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
frames.append(data)
print ("finished recording")
stream.stop_stream()
stream.close()
audio.terminate()
waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(frames))
waveFile.close()
if __name__ == "__main__":
GPIO.setmode(GPIO.BCM)
GPIO.setup(14, GPIO.IN)
CHUNK = 32000
try:
while True:
if GPIO.input(14) == GPIO.HIGH:
print(1)
rec_wav(CHUNK)
player1 = AudioPlayer("file.wav")
player1.play()
else:
print(0)
sleep(0.01)
except KeyboardInterrupt:
pass
GPIO.cleanup()
>
CHUNK = 32000はdeviceに依存するらしい.
raspiではwebcamのマイクがこの値以下でないとoverflowと怒られた.
macbook proの側面のマイクだと1024で動いていた.
result
タクトかましてるだけ.
動画:
sound parrot, raspberrypi
https://youtu.be/YbmjT60wcRk
ドアを開けたら『帰ります』と中から返事が来るようにしたraspi
タクトスイッチをチルトセンサーに変え,
ドアの開閉の揺れでONになるように微妙に傾ける.
『帰ります』の音声(kaerimasu.wav)を事前に録音しておいて, チルトのスイッチで再生するだけのシンプル機構.
# -*- coding: utf-8 -*-
import RPi.GPIO as GPIO
from time import sleep
import pyaudio
import wave
import threading
class AudioPlayer(object):
""" A Class For Playing Audio """
def __init__(self, audio_file):
self.audio_file = audio_file
self.playing = threading.Event() # 再生中フラグ
def run(self):
""" Play audio in a sub-thread """
audio = pyaudio.PyAudio()
input = wave.open(self.audio_file, "rb")
output = audio.open(format=audio.get_format_from_width(input.getsampwidth()),
channels=input.getnchannels(),
rate=input.getframerate(),
output=True)
while self.playing.is_set():
data = input.readframes(CHUNK)
if len(data) > 0:
# play audio
output.write(data)
else:
# end playing audio
self.playing.clear()
# stop and close the output stream
output.stop_stream()
output.close()
# close the input file
input.close()
# close the PyAudio
audio.terminate()
def play(self):
""" Play audio. """
if not self.playing.is_set():
self.playing.set()
self.thread = threading.Thread(target=self.run)
self.thread.start()
def wait(self):
if self.playing.is_set():
self.thread.join()
def stop(self):
""" Stop playing audio and wait until the sub-thread terminates. """
if self.playing.is_set():
self.playing.clear()
self.thread.join()
def rec_wav(CHUNK):
FORMAT = pyaudio.paInt16
CHANNELS = 1 #モノラル
RATE = 32000 #サンプルレート
RECORD_SECONDS = 5 #録音する時間の長さ
WAVE_OUTPUT_FILENAME = "file.wav"
audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT, channels=CHANNELS,
rate=RATE, input=True,
input_device_index=0, #デバイスのインデックス番号
frames_per_buffer=CHUNK)
print ("recording...")
frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
frames.append(data)
print ("finished recording")
stream.stop_stream()
stream.close()
audio.terminate()
waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
waveFile.setnchannels(CHANNELS)
waveFile.setsampwidth(audio.get_sample_size(FORMAT))
waveFile.setframerate(RATE)
waveFile.writeframes(b''.join(frames))
waveFile.close()
if __name__ == "__main__":
GPIO.setmode(GPIO.BCM)
GPIO.setup(14, GPIO.IN)
CHUNK = 32000
try:
while True:
if GPIO.input(14) == GPIO.HIGH:
print(1)
#rec_wav(CHUNK)
player1 = AudioPlayer("kaerimasu.wav")
player1.play()
sleep(5)
else:
print(0)
sleep(0.01)
except KeyboardInterrupt:
pass
GPIO.cleanup()
result
こんな感じです.
チルトは横向きにしたんですが, その微妙な調節が難しかった.
本当はドアのノックに合わせてon/offにしたかったけど, 上手くいかず.
苦し紛れの対応として, ドアを思いっきり開けた後にちょっと揺らすというズルをしております.
前の回路のタクトがチルトに変わっただけ.
USBマイク(webcam)もついてるけどココでは使ってない.
壁にくっつけた時の雑実装感たるや
動画:
I’ll go home, sound test by tilt sensor when people open the door of my room.
https://youtu.be/im4rrRee318 |
Использование custom functions в парсерах OSSIM
Доброго дня, уважаемые!
В продолжение моей статьи хочу рассмотреть и поделиться опытом работы с функционалом «custom functions», используемом в OSSIM. Это функции, которые предназначены для обработки полученной вследствие разбора (парсинга) журналов событий информации. Обработка может заключаться в разрешении имени по IP адресу, определении геолокации и всего того, на что хватит фантазии. В примере ниже я разберу вариант использования «custom functions» для дополнительного парсинга полученной информации.
Предположим, что вы извлекаете журналы событий из базы данных (далее — БД), как я описывал в статье. И случилось так, что в одном из полей БД у Вас лежит не одно какое-то значение, например «имя пользователя» или «IP адрес», а целая строка сообщения из которой нужно выделить ключевые слова (например, строка вида «issued command: ls /root; result: ACCEPT»). И Вам из данной строки нужно получить текст команды (ls /root) и результат ее выполнения (ACCEPT).
Очевидно, что стандартным функционалом, доступным для источников журналов событий типа «mysql», это сделать невозможно. Тут-то и приходит на помощь функционал «custom functions». С его помощью из полученной строки мы сможем выделить интересующие нас фрагменты информации. Итак, приступим.
На базе примера из статьи необходимо выделить информацию об отданной команде (все, что следует после «command:») и результате ее выполнения (все, что следует после «result:») из строки лога, хранимой в базе данных в поле «message». Записать текст команды в поле «userdata3», а результат выполнения команды в поле «userdata4».
Пример таблицы из БД:
Для решения задачи будет выполнено:
Для того, чтобы использовать свою собственную функцию в парсере OSSIM необходимо создать новый файл, например:
И добавить функции в файл в виде:
Функции пишутся на python.
Я написал следующую функцию для выделения информации об отданной команде из текста события:
Как видно, данная функция с помощью регулярного выражения получает необходимую информацию (все, что находится после «command:» и до последней ";". Говорю последняя т.к. в теле команды тоже могут присутствовать ";") и возвращает ее агенту OSSIM для дальнейшей обработки парсером.
Аналогично пишем вторую функцию и добавляем ее в файл:
В итоге, файл «/usr/share/alienvault/ossim-agent/plugins/db_logs_func.cfg» имеет вид:
Подробно на разъяснении всей конфигурации парсера останавливаться не буду, т.к. это уже сделано ранее в примере.
Для сообщения парсеру о необходимости использования файла с созданными функциями нужно в секцию [config] конфигурационного файла парсера дописать дописать следующую строку:
Для использования созданных функций в парсере OSSIM используется конфигурационная строка вида:
<поле OSSIM>={<имя функции>(<параметр>)}
Этой строкой мы сообщаем агенту OSSIM в какое поле схемы описания события OSSIM нужно поместить информацию, получаемую применением функции к параметру. А параметром в нашем случае является информация, полученная из БД из поля «message», т.е. текст события вида «issued command: show arp; result: ACCEPT».
Поля OSSIM в нашем примере будут: userdata3, userdata4
Функции, соответственно: «def parse_command» и «def parse_result»
Параметром будет "$4"
В итоге, строки, которые нужно ддобавить в конфигурационный файл парсера выглядят так:
Ниже приведен итоговый фрагмент (секция query) конфигурационного файла парсера OSSIM:
После выполнения данных манипуляций необходимо перезапустить агента OSSIM:
После перезапуска не лишним будет проследить за сообщениями в файле лога на предмет ошибок (вдруг где-то закралась):
Если все сделано правильно, то в графическом интерфейсе OSSIM можно увидеть разбор события. Примерно такой, как на рисунке 1.
Здесь хотелось бы сказать, что первая моя мысль была попробовать передавать два параметра в функцию, чтобы в поля userdata3 и userdata4 записывать разные части из исходного текста.
Например, передавая (текст, 1) получать команду, а (текст, 2) — соответственно, результат. Такое решение мне кажется наиболее изящным.
Я даже написал под это функцию, которая отрабатывает, если запускать ее в командной строке сервера. Но вот агент OSSIM никак не желает принимать два параметра, только один.
Я обратился в alienvault с этим вопросом, но пока ответа не получил. Если у кого есть размышления на эту тему, прошу, пишите в личку или комментарии.
Заранее спасибо!
В продолжение моей статьи хочу рассмотреть и поделиться опытом работы с функционалом «custom functions», используемом в OSSIM. Это функции, которые предназначены для обработки полученной вследствие разбора (парсинга) журналов событий информации. Обработка может заключаться в разрешении имени по IP адресу, определении геолокации и всего того, на что хватит фантазии. В примере ниже я разберу вариант использования «custom functions» для дополнительного парсинга полученной информации.
1. Для чего это нужно?
Предположим, что вы извлекаете журналы событий из базы данных (далее — БД), как я описывал в статье. И случилось так, что в одном из полей БД у Вас лежит не одно какое-то значение, например «имя пользователя» или «IP адрес», а целая строка сообщения из которой нужно выделить ключевые слова (например, строка вида «issued command: ls /root; result: ACCEPT»). И Вам из данной строки нужно получить текст команды (ls /root) и результат ее выполнения (ACCEPT).
Очевидно, что стандартным функционалом, доступным для источников журналов событий типа «mysql», это сделать невозможно. Тут-то и приходит на помощь функционал «custom functions». С его помощью из полученной строки мы сможем выделить интересующие нас фрагменты информации. Итак, приступим.
2. Формулировка задачи
На базе примера из статьи необходимо выделить информацию об отданной команде (все, что следует после «command:») и результате ее выполнения (все, что следует после «result:») из строки лога, хранимой в базе данных в поле «message». Записать текст команды в поле «userdata3», а результат выполнения команды в поле «userdata4».
Пример таблицы из БД:
+---------------------+----------+----------------------+----------+--------------------------------------------+
| date | event_id | event_type | username | message |
+---------------------+----------+----------------------+----------+--------------------------------------------+
| 2016-07-22 17:17:05 | 283 | type 1 | net_adm | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:49 | 284 | suspicious activity | operator | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:50 | 285 | suspicious activity | admin | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:51 | 286 | suspicious activity | guest | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:52 | 287 | type 1 | unknown | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:53 | 288 | type 1 | valeriy | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:54 | 289 | suspicious activity | alex | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:55 | 290 | type 1 | cisco | issued command: show arp; result: ACCEPT |
| 2016-07-22 17:17:57 | 291 | suspicious activity | net_adm | issued command: show arp; result: ACCEPT |
+---------------------+----------+----------------------+----------+--------------------------------------------+
3. Решение
Для решения задачи будет выполнено:
создание функции для выделения информации о команде;
создание функции для выделения информации о результате выполнения команды;
дополнительная настройка парсера (ранее созданного в примере).
Для того, чтобы использовать свою собственную функцию в парсере OSSIM необходимо создать новый файл, например:
/usr/share/alienvault/ossim-agent/plugins/db_logs_func.cfg
И добавить функции в файл в виде:
Start Function <имя функции> <тело функции>End Function
Функции пишутся на python.
3.1. Создание функции для выделения информации о команде
Я написал следующую функцию для выделения информации об отданной команде из текста события:
def parse_command(input):
res = re.search(r'command:.*;', input)
return (res.group(0).split(": ")[1].strip(";"))
Как видно, данная функция с помощью регулярного выражения получает необходимую информацию (все, что находится после «command:» и до последней ";". Говорю последняя т.к. в теле команды тоже могут присутствовать ";") и возвращает ее агенту OSSIM для дальнейшей обработки парсером.
3.2. Создание функции для выделения информации о результате выполнения команды
Аналогично пишем вторую функцию и добавляем ее в файл:
def parse_result(input):
res = re.search(r'result:\s+\S+', input)
return (res.group(0).split(": ")[1])
В итоге, файл «/usr/share/alienvault/ossim-agent/plugins/db_logs_func.cfg» имеет вид:
Start Function parse_command
def parse_command(input):
res = re.search(r'command:.*;', input)
return (res.group(0).split(": ")[1].strip(";"))
End Function
Start Function parse_result
def parse_result(input):
res = re.search(r'result:\s+\S+', input)
return (res.group(0).split(": ")[1])
End Function
3.3. Дополнительная настройка парсера
Подробно на разъяснении всей конфигурации парсера останавливаться не буду, т.к. это уже сделано ранее в примере.
Для сообщения парсеру о необходимости использования файла с созданными функциями нужно в секцию [config] конфигурационного файла парсера дописать дописать следующую строку:
custom_functions_file=/etc/ossim/agent/plugin/db_logs_func.cfg
Для использования созданных функций в парсере OSSIM используется конфигурационная строка вида:
<поле OSSIM>={<имя функции>(<параметр>)}
Этой строкой мы сообщаем агенту OSSIM в какое поле схемы описания события OSSIM нужно поместить информацию, получаемую применением функции к параметру. А параметром в нашем случае является информация, полученная из БД из поля «message», т.е. текст события вида «issued command: show arp; result: ACCEPT».
Поля OSSIM в нашем примере будут: userdata3, userdata4
Функции, соответственно: «def parse_command» и «def parse_result»
Параметром будет "$4"
В итоге, строки, которые нужно ддобавить в конфигурационный файл парсера выглядят так:
userdata3={parse_command($4)}
userdata4={parse_result($4)}
Ниже приведен итоговый фрагмент (секция query) конфигурационного файла парсера OSSIM:
[query]
query="select event_id, date, event_type, username, message from data_table where event_id > $1;"
#order by event_id desc limit 1
regexp=
ref=0
date={normalize_date($1)}
plugin_sid={translate($2)}
username={$3}
userdata1={$4}
userdata2={$2}
userdata3={parse_command($4)}
userdata4={parse_result($4)}
После выполнения данных манипуляций необходимо перезапустить агента OSSIM:
/etc/init.d/ossim-agent restart
После перезапуска не лишним будет проследить за сообщениями в файле лога на предмет ошибок (вдруг где-то закралась):
tail -f /var/log/alienvault/agent/agent.log|grep ERROR
Если все сделано правильно, то в графическом интерфейсе OSSIM можно увидеть разбор события. Примерно такой, как на рисунке 1.
Рисунок 1– Разобранные события в интерфейсе OSSIM
4. Усовершенствование
Здесь хотелось бы сказать, что первая моя мысль была попробовать передавать два параметра в функцию, чтобы в поля userdata3 и userdata4 записывать разные части из исходного текста.
Например, передавая (текст, 1) получать команду, а (текст, 2) — соответственно, результат. Такое решение мне кажется наиболее изящным.
Я даже написал под это функцию, которая отрабатывает, если запускать ее в командной строке сервера. Но вот агент OSSIM никак не желает принимать два параметра, только один.
Я обратился в alienvault с этим вопросом, но пока ответа не получил. Если у кого есть размышления на эту тему, прошу, пишите в личку или комментарии.
Заранее спасибо! |
Feb 102019
No password on postgres user fix:
pg_hba.conf
local all all trust
ALTER USER postgres with password 'newpassword';
Then you can add your user account as a superuser:
ALTER ROLE ryan with SUPERUSER;
# then restart the server
sudo /etc/init.d/postgresql restart
You'll probably need to change your pg_hba.conf file back to something like this:
local all trust
host all 127.0.0.1 255.255.255.255 trust
host booktown 192.168.1.3 255.255.255.255 ident sales
host all 192.168.1.4 255.255.255.255 ident audit
Feb 032019
I got tired of trying to write to files because of the write limitations and switched everything over to postgres. Now I remember how well it works, but also how many problems can arise. The cool thing about my recent script is that it solves a lot of issues all in one go. Since I will by completing rows in the DB in multiple increments I had to check if it exists and then check which part to update. In other words I had to SELECT, UPDATE, and INSERT in a few different ways. Here's the code:
import re
import datetime
import psycopg2
import json
with open('./data/database.json') as f:
DATABASE = json.load(f)
class DBTest:
def __init__(self, keyword, results):
self.con = psycopg2.connect(**DATABASE)
self.cur = con.cursor()
self.mkeyword = keyword
self.results = results
self.pg_2 = 'https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3AXbox+One+Controller+Stand&page=2&keywords=Xbox+One+Controller+Stand'
def updater(self, domain):
if domain == 'www.ebay.com':
self.cur.execute("UPDATE keyword_pages SET ebay_results='" + self.results + "' WHERE keyword='" + self.mkeyword + "'")
self.con.commit()
elif domain == 'www.etsy.com':
self.cur.execute("UPDATE keyword_pages SET etsy_results='" + self.results + "' WHERE keyword='" + self.mkeyword + "'")
self.con.commit()
elif domain == 'www.amazon.com':
self.cur.execute("UPDATE keyword_pages SET amazon_results='" + self.results +
"', amazon_pg2='" + self.pg_2 + "' WHERE keyword='" + self.mkeyword + "'")
self.con.commit()
def test(self):
self.cur.execute("""SELECT * FROM keyword_pages WHERE NOT complete AND amazon_results
!= 'blank' AND ebay_results != 'blank' AND etsy_results != 'blank'""")
rows = self.cur.fetchall()
for row in rows:
print(row[0])
self.cur.execute("select exists(select keyword from keyword_pages where keyword='" + self.mkeyword + "')")
exists = self.cur.fetchone()[0]
if exists:
self.updater('www.etsy.com')
else:
columns = "keyword, amazon_results, amazon_pg2, ebay_results, etsy_results, complete"
values = "'pogo stick', 'blank', 'blank', '14', 'blank', 'f'"
self.cur.execute('INSERT INTO keyword_pages (' + columns + ') VALUES (' + values + ')')
self.con.commit()
self.con.close()
class LinkGen:
def __init__(self):
self.link_pieces = []
self.links = []
self.keywords = {
'extra black coffee': [['www.amazon.com', '4', '/jumprope/s?ie=UTF8&page=2&rh=i%3Aaps%2Ck%3Ajumprope'], ['www.ebay.com', '5'], ['www.etsy.com', '7']],
'decaf coffee': [['www.amazon.com', '5', 'https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Ablack+coffee&page=2&keywords=black+coffee&ie=UTF8&qid=1549211788'],
['www.ebay.com', '3'], ['www.etsy.com', '9']],
}
# How Amazon identifies if a link is internal/new or external/old (very simple actually)
def qid(self):
return round((datetime.datetime.today() - datetime.datetime(1970, 1, 1)).total_seconds())
def amazon_gen(self, search_term, page_total, page_2):
self.link_pieces = ['https://www.amazon.com/s/ref=sr_pg_', '?rh=', '&page=', '&keywords=', '&ie=UTF8&qid=']
rh = re.search('rh=([^&|$]*)', str(page_2), re.IGNORECASE).group(1)
print(rh)
all_links = []
for page in range(1, int(page_total) + 1):
all_links.append(
f'{self.link_pieces[0]}{page}{self.link_pieces[1]}{rh}{self.link_pieces[2]}{page}{self.link_pieces[3]}{"+".join(search_term.split(" "))}{self.link_pieces[4]}')
return all_links
def link_gen(self, domain, search_term, page_total):
if domain == 'www.ebay.com':
self.link_pieces = ['https://www.ebay.com/sch/i.html?_nkw=', '&rt=nc&LH_BIN=1&_pgn=']
elif domain == 'www.etsy.com':
self.link_pieces = ['https://www.etsy.com/search?q=', '&page=']
all_links = []
for page in range(1, int(page_total) + 1):
all_links.append(f'{self.link_pieces[0]}{"+".join(search_term.split(" "))}{self.link_pieces[1]}{page}')
return all_links
def test(self):
for keyword in self.keywords.keys():
for results in keywords[keyword]:
if results[0] == 'www.amazon.com':
self.links.append(self.amazon_gen(keyword, results[1], results[2]))
else:
self.links.append(self.link_gen(results[0], keyword, results[1]))
print(self.links)
if __name__ == "__main__":
links = LinkGen()
db = DBTest('pogo stick', '15')
db.test()
Since I had to dig through many other project's code base to figure a lot of this out, not to mention Google, I figured I should put what I collected here so I can find it later.
-- psql
****
sudo apt install postgresql
sudo service postgresql start
sudo su - postgres
createuser --superuser ryan
psql # <- command line tool for making queries
\password ryan
\q # <- exit psql to create new users/dbs or import/export db's (psql is for sql)
createdb ryan # or whatever# exit and now you can run psql in your own console with your username.
***
#start automatically
sudo systemctl enable postgresql
# do database commands
psql -d <database>
alter user ryan with encrypted password <password>;
sudo -i -u ryan
# export
pg_dump -U ryan ebay_keywords > database-dec-18.txt --data-only
# importable export
pg_dump -U ryan ebay_keywords > database-dec-18.pgsql
# Import
psql reviewmill_scraped < database-dec-18.pgsql
CREATE TABLE keyword_pages (
keyword VARCHAR(255) NOT NULL PRIMARY KEY,
amazon_results VARCHAR(16),
amazon_pg2 VARCHAR(255),
ebay_results VARCHAR(16),
etsy_results VARCHAR(16),
complete BOOLEAN NOT NULL
);
ALTER TABLE keyword_pages ALTER COLUMN etsy_results TYPE VARCHAR(16);
INSERT INTO keyword_pages (keyword, amazon_results, amazon_pg2, ebay_results, etsy_results, complete)
VALUES ('extra strong coffee', 12, 'https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Ablack+coffee&page=2&keywords=black+coffee&ie=UTF8&qid=1549211788', 12, 4, 'f');
CREATE TABLE reviews (
review_id VARCHAR(30) PRIMARY KEY,
asin VARCHAR(20) NOT NULL
);
ALTER TABLE reviews
ADD CONSTRAINT asin FOREIGN KEY (asin)
REFERENCES products (asin) MATCH FULL;
# Extra stuff
ALTER TABLE reviews ALTER COLUMN asin TYPE varchar(30);
ALTER TABLE reviews ADD COLUMN review_helpful INTEGER;
Windows has been making this increasingly difficult, but I think I've avoided the worst of my connection issues. First Windows decided to automatically detect my proxies, then I find out that my ethernet card driver had some power saving crap on, and I've been having random permission issues between WSL, pipenv, and postgres.
ipconfig /release
ipconfig /renew
I haven't tried this yet, but if my internet starts acting up again this is going to be my first thing to try, considering restarting windows seems to fix it I think this should as well.
Nov 092018
I finally got around to working on my Amazon project again.
Misc Notes
# Change postgres data directory
File path:
/etc/postgresql/10/main/postgresql.conf
File System Headache
I decided to clean up my hard drives, but I forgot how much of a headache it was trying to get an NTFS drive to work with transmission-daemon. Whatever I'll just save to my EX4 partition for now and fix it later.
*Update
I bricked my OS install and had to go down a 3 hour nightmare trying to fix it. I eventually discovered that it was a label from my old partition mount point in the fstab file. Solution:
sudo nano /etc/fstab
# comment out old label
ctrl + o to save
ctrl + x to exit
reboot
My computer still doesn't restart properly because I broke something in the boot order trying to fix it. Not a big deal I just enter my username/password in the terminal then type startx.
LexSum Progress
Had to slice to 50 for each rating to save time, but I can probably make it longer for launch. At first I was thinking there would be 60 million entities to process, but actually its more like 900k x 5 (for each rating) and as long as I don't lexsum 1000+ reviews for ratings it should finish in a few days. I reallllly need to add a timer function asap. I can just time 1000 or so products and multiply that by 900k or whatever the total number of products in my database is and I should have a pretty good idea how long it will take.
if len(titles) > 50:
titlejoin = ' '.join(lex_sum(' '.join(titles[:50]), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments[:50]), sum_count))
else:
titlejoin = ' '.join(lex_sum(' '.join(titles), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments), sum_count))
I'm thinking I can clean these lines up now that I'm staring at it. Maybe something like:
titlejoin = ' '.join(
lex_sum(' '.join(titles[:min(len(titles), 50)]), sum_count))
textjoin = ' '.join(
lex_sum(' '.join(comments[:min(len(titles), 50)]), sum_count))
My estimated time remaining function adds time elapsed ever ten iterations to a list, takes the last 500 or less of that list and averages them, and finally multiplies that average by the total remaining iterations and displays it in a human readable format:
avg_sec = 0
times = []
start = time.time()
# Display time remaining
if avg_sec:
seconds_left = ((limit - count) / 10) * avg_sec
m, s = divmod(seconds_left, 60)
h, m = divmod(m, 60)
print('Estimated Time Left: {}h {}m {}s'.format(
round(h), round(m), round(s)))
if(not count % 10):
end = time.time()
time_block = end - start
start = end
times.append(time_block)
avg_sec = functools.reduce(
lambda x, y: x + y, times[-min(len(times), 500):]) / len(times[-min(len(times), 500):])
print('Average time per 10:', round(avg_sec, 2), 'seconds')
Another thought I had is that this save_df module I coded (it's at like 400 lines of code already x_x) is actually a crucial part of my ultimate code base. I'm pretty happy that I spent so much time writing it into proper functions.
Nov 012018
So I ran my summarizer yesterday and it took literally all day to run only 200 products through the lex sum function. So I went through my code and added a timer for each major step in the process like so:
start = time.time()
asin_list = get_asins(limit)
end = time.time()
print('Get ASINs: ', end - start)
Turns out it was taking over 60 seconds per query . I did the math and at the rate it was going, it would take almost two years to complete every product in my database. So I started looking around at different ways to group large databases. Turns out databases are a lot more complicated than I believed. It felt like looking for a PHP solution back in high school when I didn't know enough to know what to look for. Finally I stumbled upon a feature called Indexing. First I added the indexing code inside of my script, which had no effect, but it seemed like it had worked properly. Still though I was not going to give up that easy and I decided to open up postgres directly in the terminal and poke around to see if the indexing was applied properly. Turns out that it was not applied at all. Here is the code I used to index the asin table in reviews:
# Remote Connect
postgres psql -U ryan -h 162.196.142.159 -p 5432 databasename
# Display table Indexes
SELECT * FROM pg_indexes WHERE tablename = 'reviews';
# Create Index
CREATE INDEX asin_index ON reviews (asin);
Ureka! It worked, now the script that took all day to run yesterday ran in about a minute flat! That is the biggest difference in performance time I've ever experienced and I cant wait to see where else indexing will help my databases.
Other than that, Erin showed me a bunch of stuff in illustrator and Phototshop.
ctrl+click with select tool enables auto-select
ctrl+d — deselect
ctrl+shift+i — invert selection
ctrl+j — duplicate layer
ctrl+alt+j — duplicate and name layer |
RSA是一种非对称加密算法,基于特殊大整数(两素数乘积)的因式分解难题保证安全性。
生成密钥的步骤如下:
1、选择大素数p,q。为最大限度提高安全性,要求p与q的位数相当;
2、计算n = p * q。n称为模,modules,我们通常说的1024位密钥,指的是n为1024位,即128字节;
3、计算欧拉函数 Ø(n) = (p-1) * (q-1)。因为n为两素数p,q之积;
4、选择公钥参数e,与Ø(n) 互素。常用的e有3、17及65537;
5、求私钥参数d,满足 e * d ≡ 1 mod Ø(n) ;
6、公钥为n,e。私钥为n,d。p和q将不参与加解密运算;
公钥(n,e)加密 message -> cipher: c ≡ m ^ e mod n
私钥(n,d)解密 cipher -> message: m ≡ c ^ d mod n
证明很简单,这也是RSA的数学美感所在:
e * d = 1 mod (n)
=> e * d = 1 mod (p-1)*(q-1)
=> e * d = 1 mod (p-1), AND e * d = 1 mod (q-1)
=> m ^ (e * d) = m mod p, AND m ^ (e * d) = m mod q
=> m ^ (e * d) = m mod (p * q)
对于裸签名,原理上就是将上面的m和c调换,使用私钥加密,公钥解密。需要注意的是,PKI中的数字签名并不仅仅是裸签名。
附上一段POC:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
author: xjump.me#gmail.com
reference:
http://code.activestate.com/recipes/572196-rsa/
'''
def egcd(a,b):
# Extended Euclidean Algorithm
# returns x, y, gcd(a,b) such that ax + by = gcd(a,b)
u, u1 = 1, 0
v, v1 = 0, 1
while b:
q = a // b
u, u1 = u1, u - q * u1
v, v1 = v1, v - q * v1
a, b = b, a - q * b
return u, v, a
def modInverse(e,phi_n):
return egcd(e,phi_n)[0]%phi_n
def main():
p = 79
q = 87
n = p*q
phi_n = (p-1)*(q-1)
e = 19
d = modInverse(e,phi_n)
print "p=%d, q=%d, n=%d, e=%d, phi_n=%d, d=%d" % (p,q,n,e,phi_n,d)
m = 30
c = (m**e) % n # encrypt by public key (n,e)
m1 = (c**d) % n # decrypt by private key (n,d)
print "message is %d, cipher is %d, cipher decrypt out is %d" % (m, c, m1)
print "\n================start encryption loops attack."
save_c = None
c1 = c
while True:
m = c1
c1 = (m**e) % n
save_c = m
print "message is %d, cipher is %d" % (m, c1)
if c1 == c:
print "==================got it! after some encryption loops, m is %d" % (save_c)
break;
main()
|
I have the below code which i'm working to copy(using rsync) the contents From remote host labserver01 and dumps those into the directory /var/log/infoSec/ on the base system from where scripts runs and this works correctly and sends e-mail to the recipients, However i'm also figuring out to include a way to send e-mail even if it fails.
I'm Just wondering if there is better way do this, i'm sure there will be more elegant ways.
Appreciate any idea and review in advance.
#!/usr/bin/python3
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import smtplib
import subprocess
import argparse
import sys
import os
#Dir Structure
dest_dir = "/infralogs/external_dns_logs"
rsync_user = "root"
email_sender = "dnslogger@udalt.com"
email_receiver = "gusain@udalt.com"
msg = ""
parser = argparse.ArgumentParser()
parser.add_argument("-n","--hosts",dest="hosts",help="enter remote host/hosts name, comma seperated",metavar="HOSTS")
parser.add_argument("-s","--src",dest="source",help="source file/directory",metavar="SOURCE")
parser.add_argument("-e","--exclude",dest="exclude",help="Exclude files/Directories, comma seperated list",metavar="EXCLUDE")
if len(sys.argv) < 7:
print(len(sys.argv))
parser.print_help()
parser.exit()
args = parser.parse_args()
def sync(host,dest_dir):
exclude = ""
if not os.path.exists(dest_dir):
os.mkdir(dest_dir)
if ',' in args.exclude:
for excl in args.exclude.split(','):
exclude = exclude + " --exclude " + excl
cmd = "rsync -e 'ssh -o StrictHostKeyChecking=no' -auPz %s %s@%s:%s %s/"%(exclude,rsync_user,host,args.source,dest_dir)
else:
cmd = "rsync -e 'ssh -o StrictHostKeyChecking=no' -auPz --exclude %s %s@%s:%s %s/"%(args.exclude,rsync_user,host,args.source,dest_dir)
cmd_content = cmd
p = subprocess.Popen(cmd,shell=True)
p.wait()
print("DONE")
return cmd_content + " Rsync process completed." # returns the msg to the caller
msglist = [] # a list to store the cmd_contents for the mail body
if ',' in args.hosts:
for host in args.hosts.split(','):
dest = dest_dir + "/" + host
msglist.append(sync(host,dest))
else:
dest = dest_dir + "/" + args.hosts
msglist.append(sync(args.hosts,dest))
msg = "\n".join(msglist) # combine all cmd_contents, one per line
try:
Mail = smtplib.SMTP('mailserver.global.udalt.com', 25, 'localhost.udalt.com')
mail_obj = MIMEMultipart('alternative')
mail_obj["From"] = email_sender
mail_obj["To"] = email_receiver
mail_obj["Cc"] = "gusain@udalt.com"
mail_obj["Subject"] = "Rsync process completed Successfully."
mail_obj.attach(MIMEText(msg, 'plain'))
Mail.sendmail(from_addr=[email_sender], to_addrs=[email_receiver],msg=mail_obj.as_string())
print("Mail Sent to %s" % (email_sender))
except Exception as error:
print("Mail Failed - {}".format(error))
Command Execution method:
$ /usr/bin/dns_rsync.py -n labserver01 -s /var/log/infoSec/ -e "null" |
Getting started
Spektral is designed according to the guiding principles of Keras to make things extremely simple for beginners while maintaining flexibility for experts.
In this page we will go over the main features of Spektral while creating a graph neural network for graph classification.
Graphs
A graph is a mathematical object that represents relations between objects. We call the objects "nodes" and the relations "edges".
Both the nodes and the edges can have vector features.
In Spektral, graphs are represented with instances of spektral.data.Graph which can contain:
a: theadjacency matrix- usually ascipy.sparsematrix of shape(n_nodes, n_nodes).
x: thenode features- represented by anp.arrayof shape(n_nodes, n_node_features).
e: theedge features- usually represented in a sparse edge list format, with anp.arrayof shape(n_edges, n_edge_features).
y: thelabels- can represent anything, from graph labels to node labels, or even something else.
A graph can have all of these attributes or none of them. Since Graphs are just plain Python objects, you can also add extra attributes if you want. For instance, see graph.n_nodes, graph.n_node_features, etc.
Datasets
The spektral.data.Dataset container provides some useful functionality to manipulate collections of graphs.
Let's load a popular benchmark dataset for graph classification:
>>> from spektral.datasets import TUDataset
>>> dataset = TUDataset('PROTEINS')
>>> dataset
TUDataset(n_graphs=1113)
We can now retrieve individual graphs:
>>> dataset[0]
Graph(n_nodes=42, n_node_features=4, n_edge_features=None, y=[1. 0.])
or shuffle the data:
>>> np.random.shuffle(dataset)
or slice the dataset up into sub-datsets:
>>> dataset[:100]
TUDataset(n_graphs=100)
Datasets also provide methods for applying transforms to each data:
apply(transform)- modifies the dataset in-place, by applying thetransformto each graph;
map(transform)- returns a list obtained by applying thetransformto each graph;
filter(function)- removes from the dataset any graph for whichfunction(graph)isFalse. This is also an in-place operation.
For example, let's modify our dataset so that we only have graphs with less than 500 nodes:
>>> dataset.filter(lambda g: g.n_nodes < 500)
>>> dataset
TUDataset(n_graphs=1111) # removed 2 graphs
Now let's apply some transforms to our graphs. For example, we can modify each graph so that the node features also contain the one-hot-encoded degree of the nodes.
First, we compute the maximum degree of the dataset, so that we know the size of the one-hot vectors:
>>> max_degree = dataset.map(lambda g: g.a.sum(-1).max(), reduce=max)
>>> max_degree
12
Try to go over the lambda function to see what it does. Also, notice that we passed another function to the method with the reduce keyword. Can you guess why?
Now we are ready to augment our node features with the one-hot-encoded degree. Spektral has a lot of pre-implemented transforms that we can use:
>>> from spektral.transforms import Degree
>>> dataset.apply(Degree(max_degree))
We can see that it worked because now we have an extra max_degree + 1 node features:
>>> dataset[0]
Graph(n_nodes=42, n_node_features=17, n_edge_features=None, y=[1. 0.])
Since we will be using a GCNConv layer in our GNN, we also want to follow the original paper that introduced this layer and do some extra pre-processing of the adjacency matrix.
Since this is a fairly common operation, Spektral has a transform to do it:
>>> from spektral.transforms import GCNFilter
>>> dataset.apply(GCNFilter())
Many layers will require you to do some form of preprocessing. If you don't want to go back to the literature every time, every convolutional layer in Spektral has a preprocess(a) method that you can use to transform the adjacency matrix as needed.
Have a look at the handy LayerPreprocess transform.
Creating a GNN
Creating GNNs is where Spektral really shines. Since Spektral is designed as an extension of Keras, you can plug any Spektral layer into a Keras Model without modifications.
We just need to use the functional API because GNN layers usually need two or more inputs (so no Sequential models for now).
For our first GNN, we will create a simple network that first does a bit of graph convolution, then sums all the nodes together (known as "global pooling"), and finally classifies the result with a dense softmax layer.
Oh, and we will also use dropout for regularization.
Let's start by importing the necessary layers:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout
from spektral.layers import GCNConv, GlobalSumPool
Now we can use model subclassing to define our model:
class MyFirstGNN(Model):
def __init__(self, n_hidden, n_labels):
super().__init__()
self.graph_conv = GCNConv(n_hidden)
self.pool = GlobalSumPool()
self.dropout = Dropout(0.5)
self.dense = Dense(n_labels, 'softmax')
def call(self, inputs):
out = self.graph_conv(inputs)
out = self.dropout(out)
out = self.pool(out)
out = self.dense(out)
return out
And that's it.
Note how we mixed layers from Spektral and Keras interchangeably: it's all just computation with tensors underneath!
This also means that if you want to break free from Graph and Dataset and every other feature of Spektral, you can.
Note: If you don't want to subclass Model to implement your GNN, you can also use the classical declarative style. You just need to pay attention to the Input and leave "node" dimensions unspecified (so None instead of n_nodes).
Training the GNN
Now we're ready to train the GNN. First, we instantiate and compile our model:
model = MyFirstGNN(32, dataset.n_labels)
model.compile('adam', 'categorical_crossentropy')
and we're almost there!
However, here's where graphs get in our way. Unlike regular data, like images or sequences, graphs cannot be stretched or cut or reshaped so that we can fit them into tensors of pre-defined shape. If a graph has 10 nodes and another one has 4, we have to keep them that way.
This means that iterating over a dataset in mini-batches is not trivial and we cannot simply use the model.fit() method of Keras as-is.
We have to use a data Loader.
Loaders
Loaders iterate over a graph dataset to create mini-batches. They hide a lot of the complexity behind the process, so that you don't need to think about it. You only need to go to this page and read up on data modes, so that you know which loader to use.
Each loader has a load() method that when called will return a data generator that Keras can process.
Since we're doing graph-level classification, we can use a BatchLoader. It's a bit slow and memory intensive (a DisjointLoader would have been better), but it lets us simplify the definition of MyFirstGNN. Again, go read about data modes after this tutorial.
Let's create a data loader:
from spektral.data import BatchLoader
loader = BatchLoader(dataset_train, batch_size=32)
and we can finally train our GNN!
Since loaders are essentially generators, we need to provide the steps_per_epoch keyword to model.fit() and we don't need to specify a batch size:
model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=10)
Done!
Evaluating the GNN
Evaluating the performance of our model, be it for testing or validation, follows a similar workflow.
We create a data loader:
from spektral.data import BatchLoader
loader = BatchLoader(dataset_test, batch_size=32)
and feed it to the model by calling load():
loss = model.evaluate(loader.load(), steps=loader.steps_per_epoch)
print('Test loss: {}'.format(loss))
Node-level learning
Besides learning to predict labels for the whole graph, like in this tutorial, GNNs are very effective at learning to predict labels for each node. This is called "node-level learning" and we usually do it for datasets with one big graph (think a social network).
For example, reproducing the results of the GCN paper for classifying nodes in a citation network can be done with GCNConv layers, the Citation dataset, and a SingleLoader: check out this example.
As a matter of fact, check out all the examples.
Go create!
You are now ready to use Spektral to create your own GNNs.
If you want to build a GNN for a specific task, chances are that everything you need is already in Spektral. Check out the examples for some ideas and practical tips.
Remember to read the data modes section to learn about representing graphs and creating mini-batches.
Make sure to read the documentation, and get in touch on Github if you have a feature that you want to see implemented.
If you want to cite Spektral in your work, refer to our paper:
Graph Neural Networks in TensorFlow and Keras with Spektral
Daniele Grattarola and Cesare Alippi |
Composer is a best dependency manager for PHP. Composer can install, update and pull in all the required PHP packages to your project directory. At the time of installing package, composer will check for dependencies of package and if dependent package are there then it will also install dependencies. In this tutorial, we show you how to install and use Composer on CentOS 7 machine.
Prerequisites
Installing PHP Composer
Follow the below steps to install Composer on your CentOS 7 system:
At first, you have to install the PHP CLI packages and some of dependencies by following commands:
sudo yum install php-cli php-zip wget unzip
Now execute the below command to download the composer installer file:
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
File composer-setup.php will be downloaded to current working directory.
After that, check the data integrity of the script by comparing the script SHA-384 hash on the Composer Signatures page.
Here, we are going to use wget command to download the signature of the latest Composer installer and will store it in HASH variable as given below:
HASH="$(wget -q -O - https://composer.github.io/installer.sig)"
Now issue the below command to check that the installation script is not corrupted:
php -r "if (hash_file('SHA384', 'composer-setup.php') === '$HASH') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
You will get following output if the hashes match:
Installer verified
If the hashes don’t match you will see output like Installer corrupt. If this is case you have to download composer installation script again and check hash value until you get Installer Verified output.
Next, issue the following command to install Composer globally inside /usr/local/bin directory:
sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
It will show you output as below:
All settings correct for using Composer Downloading... Composer (version 1.9.0) successfully installed to: /usr/local/bin/composer Use it: php /usr/local/bin/composer
That’s it. Composer is installed as a system-wide and it will be available for all users.
At last, you can verify the installation by typing:
composer
The above command will print details as following:
______
/ ____/___ ____ ___ ____ ____ ________ _____
/ / / __ \/ __ `__ \/ __ \/ __ \/ ___/ _ \/ ___/
/ /___/ /_/ / / / / / / /_/ / /_/ (__ ) __/ /
\____/\____/_/ /_/ /_/ .___/\____/____/\___/_/
/_/
Composer version 1.9.0 2019-08-02 14:12:47
Usage:
command [options] [arguments]
Use Composer in PHP Project
Now the Composer is installed globally on your CentOS system and we will show how to use in your php project.
First of all, you have to create a directory which should project root directory. Create directory my-project as a root directory of your project.
sudo mkdir my-project
cd my-project
The next step is to initialize a new composer.json using the composer require command and specify the package we want to download.
In this example, we will create a sample application to print the current time using carbon package.
Execute the below command to initialize a new composer.json and install the carbon package:
composer require nesbot/carbon
After complete the installation you can see that Composer created two files composer.json and composer.lock, and a vendor directory.
ls -l
-rw-r--r-- 1 tecnstuff users 59 Aug 11 00:50 composer.json -rw-r--r-- 1 tecnstuff users 6820 Aug 11 00:50 composer.lock drwxr-xr-x 4 tecnstuff users 4096 Aug 11 00:50 vendor
The vendordirectory contains project dependencies.
The composer.lockcontains a list of all installed packages including the version of the packages.
The composer.jsondescribes the PHP project and all PHP dependencies.
Now, you have installed carbon package, create a new file named test.php and paste the following code inside the file. It will prints the current time.
<?php
require __DIR__ . '/vendor/autoload.php';
use Carbon\Carbon;
printf("Now: %s", Carbon::now());
Run above script by typing:
php test.php
The output should look like this:
Now: 2019-08-17 09:12:07
After that, if you want to update the package then you can use following command:
composer update
Conclusion
You have successfully learned how to install Composer on your CentOS 7 system. We have also described you how to use Composer to create a simple PHP project. You can get more information about Composer visit the official Composer documentation page.
If you have any question or suggestion please leave comment below. |
So, I saw this fine video on youtube about, building stuff within one day. It was made by KalleHallden and I really enjoyed this video.
But during the video I wondered if his code would work on every workstation. Now I have a Mac, and Kalle does too, but this
path = "/Users/kalle/Documents/Projects/MyProjects/"
Will not work on a Windows machine. Home folders are different on many platforms.
So, why not use a home finding feature of python itself, like :
from pathlib import Path
home = str(Path.home())
So, I was thinking, can I improve this script?
Yes!
So let’s Go!
What’s to be done
So there is this list Kalle created to be done.
Navigate to MyProjects
Create folder with project name
Navigate info folder
Git init
Go to GitHub an create new repository
Copy the remote
Add remote to my local folder
Create readme file
Git add
Git commit
Git push
Code . (open IDE)
Now I think this list can be shorter and easier.
So let’s start by navigating to the projects folder.
Navigate to folder
This piece of code lets you print the home folder. On windows, linux or Mac os
from pathlib import Path
home_folder = str(Path.home())
print(home_folder)
The output, on my Macbook is /Users/theovandersluijs
Let’s say we want to create the new projects in:
[home]/Documents/MyProjects
We can concatenate the home folder easily with the Documents and MyProject folder with the os.path.join statement.
from pathlib import Path
import os
home_folder = str(Path.home()) # this is the users home folder on any OS
my_project = os.path.join(home_folder, "Documents", "MyProjects")
print(my_project)
this gives us /Users/theovandersluijs/Documents/MyProjects
Now let’s say we want to create “New_project” into this folder structure.
Easy we are going to use os.makedirs
from pathlib import Path
import os
home_folder = str(Path.home()) # this is the users home folder on any OS
my_project = os.path.join(home_folder, "Documents", "MyProjects", "New_project")
os.makedirs(my_project, exist_ok=True)
And we are done!
Navigate to MyProjects
Create folder with project name
The first step, we did not have to do. The second step, will also be obsolete in one of the next chapters.
Creating the github repository
Creating a github repository is very easy!
First you need to install the GitHub package for Python. pip install PyGithub
So now that you have your token you can start using the script below.
from github import Github
token = "[YOUR TOKEN]"
user = Github(token).get_user()
name = "New_project"
auto_init = True # creates the Readme file
homepage = "https://www.itheo.nl"
description = "This is a nice description about this project"
private = False
license_template = "cc-by-sa-4.0"
repo = user.create_repo(
name,
auto_init=auto_init,
homepage=homepage,
description=description,
private=private,
license_template=license_template
)
So what does all these vars mean.
Name: The name of the repository ().Required
auto_init: Passtrueto create an initial commit with empty README. Default:false.
homepage: A URL with more information about the repository.
description: A short description of the repository.
private: Eithertrueto create a private repository orfalseto create a public one. Creating private repositories requires a paid GitHub account. Default:false.
license_template: Choose an open source license template that best suits your needs, and then use the license keyword as thelicense_templatestring. For example, “mit” or “mpl-2.0”.
There are various licenses you can choose from. You will find them all here
More information about creating a repository and all the possible variables on GitHub can be found here
If you like to see some output after the script use these:
print(repo.full_name)
print(repo.html_url)
print(repo.ssh_url)
The first will show you the full name of the newly created repository including your username tvdsluijs/New_project
The second shows the html url you can use either to browse to to find your repo or to use to clone your repo https://github.com/tvdsluijs/New_project
The last is the SSH url to clone your repo git@github.com:tvdsluijs/New_project.git And this last statement will come in handy when we want to clone our repo to our harddrive.
We do not need any selenium or beautifullsoup to get any of the needed data from the GitHub page.
So what steps did we do here?
Go to GitHub an create new repository
Create readme file
We actually do not need these with the code I’ve created. - [ ] Copy the remote
- [ ] Git add
- [ ] Git commit
- [ ] Git push
The Clone wars
Well… not really wars, but I just wanted to put a Star Wars item within this article :-)
But it is about cloning. Because we want to Clone the Repository to our harddrive.
Unfortunately there is no way (yet) to clone with pygithub. So we are going to do this by good old os package already within python.
home_folder = str(Path.home())
my_projects_folder = os.path.join(home_folder, "Documents", "MyProjects")
clone = "git clone {}".format(repo.ssh_url)
os.chdir(my_projects_folder)
os.system(clone)
With clone = "git clone {}".format(repo.ssh_url) you specify the ssh_url from GitHub where your repository is.
Specifying the path where the cloned project needs to be copied is done by : os.chdir(my_projects_folder) Do NOT specify the name of your project within this statement. The cloning will create the folder auto-magicly!
And clone the whole shabang with: os.system(clone)
So what steps did we do here?
Create folder with project name
Navigate info folder
Git init
Copy the remote
Add remote to my local folder
So as we did all of this within a small piece of code a lot of items became obsolete within the original ToDo List.
So what is left of the original list
Create new repository
Copy the remote to local folder
DONE!
Well only for the automated opening of the IDE and off course for the bash script for Mac and Windows.
Bash scripting
For windows you need to create a bat file and place that in your windows sys folder or create a path variable to the .bat file so you can run it anywhere.
Second to that you need to know where python.exe is located And last where the python script is located
Your bat script could look like this:
"C:\Users\Theo\AppData\Local\Programs\Python\Python37-32\python.exe" "C:\Users\Theo\Documents\MyProjects\New_project\create.py"
pause
For MacOS you should create a .sh file, some thing like .my_commands.sh
With the following code
#!/bin/bash
function create() {
python /Users/theovandersluijs/Documents/MyProjects/New_project/create.py
}
If you source ~/.my_commands.sh you will be able to start the python script anywhere from you system in a terminal.
Wrapping things up
Now, if you add some input variables to make the script more intuitive and dynamic and put some try, catch and logging into it, slam in into a class with objects, you will get something like my project you can find on GitHub!
Go to my GitHub Page for all the code.
Like the script? Please buy me a coffee for my work. Thank you!!!
Kalle Hallden’s Video
Please watch the video of Kalle below, it’s really nice to see a passionate developer working. |
Nov 032019
Can't install security updates:
sudo apt-get update && sudo apt-get dist-upgrade
first enter the following command in the terminal
sudo rm /var/lib/apt/lists/* -vf
then update your system by entering the following command in the terminal
sudo apt-get update && sudo apt-get upgrade
after this there should be no errors and everything should work fine.
The key(s) in the keyring /etc/apt/trusted.gpg.d/*** are ignored as the file has an unsupported filetype.
Jun 222019
Obviously not every pacakge on github is going to be available via pip, but downloading and installing manually clutters up your project directory. That kind of defeats the purpose of using pipenv in the first place. However, installing a package by using the git uri with pipenv is possible just like it is with pip. Here's what you type:
pipenv install -e git+git://github.com/user/project.git#egg=<project>
Pretty simple right? Here's an example of one that I've used recently just in case:
pipenv install -e git+git://github.com/miso-belica/sumy.git#egg=sumy
Which is the command to install this package: https://github.com/miso-belica/sumy
If you have pipenv command not found use this to fix it:
sudo -H pip install -U pipenv
for scrapy with Python 3, you'll need
sudo apt-get install python3 python-dev python3-dev \
build-essential libssl-dev libffi-dev \
libxml2-dev libxslt1-dev zlib1g-dev \
python-pip
with Python 2, you'll need
sudo apt-get install python-dev \
build-essential libssl-dev libffi-dev \
libxml2-dev libxslt1-dev zlib1g-dev \
python-pip
Jun 112019
I've been coding again and just remembered how well this website works for keeping track of cool tricks I learn. Sometimes it's really hard to find simple and generic examples of things to help teach the fundamentals. I needed to write to a file without opening the text document 1000 times and I finally found a really clean example that helped me understand the pieces.
Edit** Threadpool is a lot easier and you can thread inside a loop:
from multiprocessing.pool import ThreadPool as Pool
threads = 100
p = Pool(threads)
p.map(function, list)
More complicated version:
import threading
lock = threading.Lock()
def thread_test(num):
phrase = "I am number " + str(num)
with lock:
print phrase
f.write(phrase + "\n")
threads = []
f = open("text.txt", 'w')
for i in range (100):
t = threading.Thread(target = thread_test, args = (i,))
threads.append(t)
t.start()
while threading.activeCount() > 1:
pass
else:
f.close()
Close something on Scrapy spider close without using a pipeline:
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
class MySpider(CrawlSpider):
def __init__(self):
dispatcher.connect(self.spider_closed, signals.spider_closed)
def spider_closed(self, spider):
# second param is instance of spder about to be closed.
Instead of using an if time or if count to activate something I found a decorator that will make sure the function on runs once:
def run_once(f):
def wrapper(*args, **kwargs):
if not wrapper.has_run:
wrapper.has_run = True
return f(*args, **kwargs)
wrapper.has_run = False
return wrapper
@run_once
def my_function(foo, bar):
return foo+bar
You can also resize the terminal inside the code:
import sys
sys.stdout.write("\x1b[8;{rows};{cols}t".format(rows=46, cols=54))
I got stuck for a while trying to get my repository to let me login without creating an ssh key (super annoying imo) and I figured out that I added the ssh url for the origin url and needed to reset it to the http:
change origin url
git remote set-url origin <url-with-your-username>
Combine mp3 files with linux:
ls *.mp3
sudo apt-get install mp3wrap
mp3wrap output.mp3 *.mp3
Regex is always better than splitting a bunch of times and making the code messy. Plus it's a lot easier to pick up the code later on and figure out what's going on. So I decided to take my regex to the next level and start labeling groups (I'm even going to give it it's very own tag :3:
pat = r'(?<=\,\"searchResults\"\:\{)(?<list_results>.*)(?=\,\"resultsHash\"\:)'
m = re.match(pat, url)
if m:
self.domain = m.group('list_results')
Feb 102019
No password on postgres user fix:
pg_hba.conf
local all all trust
ALTER USER postgres with password 'newpassword';
Then you can add your user account as a superuser:
ALTER ROLE ryan with SUPERUSER;
# then restart the server
sudo /etc/init.d/postgresql restart
You'll probably need to change your pg_hba.conf file back to something like this:
local all trust
host all 127.0.0.1 255.255.255.255 trust
host booktown 192.168.1.3 255.255.255.255 ident sales
host all 192.168.1.4 255.255.255.255 ident audit
Feb 102019
This problem has been driving me nuts for a while now, but on the bright side it caused me to close all of my files by habit. The solution to this problem should have been obvious, but like many things in my life, it just didn't click. The problem is that I was trying to change global OS settings from within a non-root account. You have to actually change the config files from the root account and give all sub-accounts the ability to raise their limits. Also note, that the command "sudo ulimit -n 9000000" does not work.
Solution
Temporarily extend limits by switching to root and typing:
ulimit -n 900000
It's better to extend it on all users though so that you don't have to do everything in root. It took me a while to figure this out because I was changing the config in my user account etc/config files, but you have to switch to root and change the config in there to change the allowed limits.
sudo su root
/etc/security
rmate limits.conf
Then add this to the file and save:
* soft nofile 900000
* hard nofile 900000
<user> soft nofile 900000
<user> hard nofile 900000
root soft nofile 900000
root hard nofile 900000
Dec 222018
I've completely switched over to Windows 10 with WSL on my main development computer and it's going pretty well. I just cant stand coding in Windows because everything is different and nothing works as well as it does on Linux. My job requires a lot of design work so having my home computers on Linux was not very practical. So when I heard about a native Linux sub-system I jumped at it. I will be putting any issues that I solve in this article.
Getting Rsub Working with Windows WSL & Ubuntu 18.04
Add rsub to sublime with package control (on Windows)
Install & configure rmate (on Linux)
Install openssh-server (on Linux)
configure ssh (on Linux)
add bashrc script with sudo and -f (on Linux)
Installing & Rmate
pip install rmate
sudo nano /etc/rmate.rc
127.0.0.1
52698
ctrl+o
ctrl+x
Install & Configure Openssh Server
sudo apt install openssh-server
sudo nano /etc/ssh/sshd_config
Port 2222
ListenAddress 0.0.0.0
Protocol 2
PasswordAuthentication yes
StrictModes no
ctrl+o
ctrl+x
sudo nano /etc/ssh/ssh_config
Host *
RemoteForward 52698 localhost:52698
Port 2222
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
sudo service ssh --full-restart
Bashrc Configurations
sudo nano ~/.bashrc
Open any file with Sublime Text
I plan on expanding this so that it can open on other windows drives like E:/
function subl {
CUR_PATH=`readlink -f $1`
if [[ $CUR_PATH == /mnt/c/* ]]; then
/mnt/c/Program\ Files/Sublime\ Text\ 3/subl.exe "C:${CUR_PATH:6}"
else
sudo rmate $CUR_PATH -f
fi
}
Convert and Open Shell Directory in Explorer
$() runs subshell function but leaves quotes around result
`` double ticks run the wslpath function in a subshell and strips quotes from result
$pwd is in quotes because directory spaces break the wslpath function
/$1 is an optional parameter for a subdir path
open() { explorer.exe `wslpath -w "$PWD"/$1`; }
Handy Bash Aliases
alias bashrc='subl ~/.bashrc' # open bashrc config
alias rbash='. ~/.bashrc' # reset bash shell to use changes
alias rbash='. ~/.bashrc' # reset bashrc in terminal
alias startredis='sudo service redis-server start'
alias stopredis='sudo service redis-server stop'
Windows Python Path Conflicting with Pipenv
This one is pretty annoying. I installed python 3.7 on my windows computer so that I could do linting on Sublime Text and it caused my pipenv to start using that path for the --three tag. I suppose I could have specified a different version, but I assumed there would be a way to turn off the windows python path inside WSL. I tried a few different ways, but none of them worked. I gave up and just made a bash function that points to my linux path:
##! Don't install packages with this, it will break dependency matching
pipenv3() { pipenv --python=/usr/bin/python3 install "[email protected]"; }
Note: bash script variables won't work if you use single quotes like this -> '
Other Things
ConEmu as bash editor
DejaVu Sans Mono font for everything (11pt)
Started saving appdata inside Google Drive
win+x shows "Power Menu"
win+ → or win + ← fits window to half screen
display fusion allows shortcuts on secondary taskbar
stickies — sticky notes minimize to tray
Musicbee — Powerful music player that saves spot
Side Note about Pip
Something that has been bothering me for a while now is whether I should install pipenv with pip or pip3. Turns out that pip is not the python two version of pip, but rather a hybrid of both. So there is pip3, pip, and pip2. So the obvious answer is to install it using plain pip.
"pip3 always operates on the Python3 environment only, as pip2 does with Python2. pip operates on whichever environment is appropriate to the context."
Use "sudo apt install pip" on Ubuntu — Doesn't work well on Mint
Setting up Postgresql Properly
sudo apt install postgresql
sudo service postgresql start
sudo su - postgres
createuser --superuser ryan
psql # <- command line tool for making queries
\password ryan
\q # <- exit psql to create new users/dbs or import/export db's (psql is for sql)
createdb ryan # or whatever# exit and now you can run psql in your own console with your username.
#start automatically
sudo systemctl enable postgresql
Setting up Redis
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install redis-server
sudo service redis-server start
sudo service redis-server stop
sudo service redis-server restart
# running just redis-server will force you to keep a bash window open
# I usually just create a bashrc alias for this /shrug
# for automatically starting redis enter
sudo systemctl enable redis-server
Nov 112018
I noticed that my blog has footprints that show which script I used which might cause someone to seek it out and try and exploit it. So I was digging through my code looking for an error message so I could change it and was getting very frustrated. So I looked into finding strings inside files and it worked instantly:
# Recursive Find
grep -rnw '<file path>' -e '<string to find>'
# Recursive Replace:
grep -lR "<search_phrase>" <file path> | xargs sed -i 's/<search_phrase>/<replace_phrase>/g'
# Current Dir
grep -lR "<search_phrase>" . | xargs sed -i 's/<search_phrase>/<replace_phrase>/g'
Original Post Text
grep -rnw '/path/to/somewhere/' -e 'pattern'
-r or -R is recursive,
-n is line number, and
-w stands for match the whole word.
-l (lower-case L) can be added to just give the file name of matching files.
Along with these, --exclude, --include, --exclude-dir flags could be used for efficient searching:
This will only search through those files which have .c or .h extensions:
grep --include=\*.{c,h} -rnw '/path/to/somewhere/' -e "pattern"
This will exclude searching all the files ending with .o extension:
grep --exclude=*.o -rnw '/path/to/somewhere/' -e "pattern"
For directories it's possible to exclude a particular directory(ies) through --exclude-dir parameter. For example, this will exclude the dirs dir1/, dir2/ and all of them matching *.dst/:
grep --exclude-dir={dir1,dir2,*.dst} -rnw '/path/to/somewhere/' -e "pattern"
This works very well for me, to achieve almost the same purpose like yours.
For more options check man grep.
Nov 102018
Added NTFS folder sharing over the network without actually having user permission of the folder. Here's how I enabled, adding usershare owner only = false below [global]
sudo nano /etc/samba/smb.conf
# Any line which starts with a ; (semi-colon) or a # (hash)
# is a comment and is ignored. In this example we will use a #
# for commentary and a ; for parts of the config file that you
# may wish to enable
#
# NOTE: Whenever you modify this file you should run the command
# "testparm" to check that you have not made any basic syntactic
# errors.
#
#======================= Global Settings =======================
[global]
usershare owner only = false
## Browsing/Identification ###
ctrl + o
Fix NTFS Permissions
Found some hopfully looking insight on how to give user access to mounted drives.
If you mount a partition to a folder within /home/user it will be owned by the user. Here's the line I added to my /etc/fstab.
UUID=9e5bb53c-4443-4124-96a8-baeb804da204 /home/fragos/Data ext4 errors=remount-ro 0 1
Keyword Raking / Splitting
Going to rake keywords from the comments and then use a 1 sentence lexsum of all of the titles for loop display and other stuff.
# Rake keywords
rake = Rake(min_length=2, max_length=6,
ranking_metric=Metric.DEGREE_TO_FREQUENCY_RATIO)
rake.extract_keywords_from_text(textjoin)
sumkeywords.append(' : '.join(rake.get_ranked_phrases()))
Source: https://github.com/csurfer/rake-nltk
I had to change the word tokenizer in the class to the nltk twitter tokenizer so that it wouldn't split words by apostrophes.
from nltk.tokenize import wordpunct_tokenize, TweetTokenizer
tknzr = TweetTokenizer()
...
word_list = [word.lower() for word in tknzr.tokenize(sentence)]
I've also decided to use ' : ' as my official list of terms splitting format. Commas are too common and might add complications in the future.
Flask Dev
I turned the CSV file generated from the lexsum generator to preview the summaries and keyword extraction in the flask app.
# load data and create sub dataframe for product asin
data = pd.read_csv('./static/data/sample-products.csv', index_col=0)
product_comments = data.loc[data['asin'] == asin]
# create variables for each rating
for number in range(1,6):
current = product_comments.loc[product_comments['rating'] == number]
product['{}_keywords'.format(number)] = current['keywords'].tolist()[0]
product['{}_title'.format(number)] = current['title'].tolist()[0]
product['{}_text'.format(number)] = current['text'].tolist()[0]
# load variables inside flask template
<p>{{product['4_text']}}</p>
<p><strong>{{product['4_keywords']}}</strong></p>
Nov 092018
I finally got around to working on my Amazon project again.
Misc Notes
# Change postgres data directory
File path:
/etc/postgresql/10/main/postgresql.conf
File System Headache
I decided to clean up my hard drives, but I forgot how much of a headache it was trying to get an NTFS drive to work with transmission-daemon. Whatever I'll just save to my EX4 partition for now and fix it later.
*Update
I bricked my OS install and had to go down a 3 hour nightmare trying to fix it. I eventually discovered that it was a label from my old partition mount point in the fstab file. Solution:
sudo nano /etc/fstab
# comment out old label
ctrl + o to save
ctrl + x to exit
reboot
My computer still doesn't restart properly because I broke something in the boot order trying to fix it. Not a big deal I just enter my username/password in the terminal then type startx.
LexSum Progress
Had to slice to 50 for each rating to save time, but I can probably make it longer for launch. At first I was thinking there would be 60 million entities to process, but actually its more like 900k x 5 (for each rating) and as long as I don't lexsum 1000+ reviews for ratings it should finish in a few days. I reallllly need to add a timer function asap. I can just time 1000 or so products and multiply that by 900k or whatever the total number of products in my database is and I should have a pretty good idea how long it will take.
if len(titles) > 50:
titlejoin = ' '.join(lex_sum(' '.join(titles[:50]), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments[:50]), sum_count))
else:
titlejoin = ' '.join(lex_sum(' '.join(titles), sum_count))
textjoin = ' '.join(lex_sum(' '.join(comments), sum_count))
I'm thinking I can clean these lines up now that I'm staring at it. Maybe something like:
titlejoin = ' '.join(
lex_sum(' '.join(titles[:min(len(titles), 50)]), sum_count))
textjoin = ' '.join(
lex_sum(' '.join(comments[:min(len(titles), 50)]), sum_count))
My estimated time remaining function adds time elapsed ever ten iterations to a list, takes the last 500 or less of that list and averages them, and finally multiplies that average by the total remaining iterations and displays it in a human readable format:
avg_sec = 0
times = []
start = time.time()
# Display time remaining
if avg_sec:
seconds_left = ((limit - count) / 10) * avg_sec
m, s = divmod(seconds_left, 60)
h, m = divmod(m, 60)
print('Estimated Time Left: {}h {}m {}s'.format(
round(h), round(m), round(s)))
if(not count % 10):
end = time.time()
time_block = end - start
start = end
times.append(time_block)
avg_sec = functools.reduce(
lambda x, y: x + y, times[-min(len(times), 500):]) / len(times[-min(len(times), 500):])
print('Average time per 10:', round(avg_sec, 2), 'seconds')
Another thought I had is that this save_df module I coded (it's at like 400 lines of code already x_x) is actually a crucial part of my ultimate code base. I'm pretty happy that I spent so much time writing it into proper functions.
Oct 282018
Today was decent. I managed to fix my main linux install by removing the dpdk architecture that was breaking it, hallelujah!
sudo apt-get purge ".*:ppc64el"
sudo dpkg --remove-architecture ppc64el
I got cuda samples running on it too and have my scraping back up and running. Oh, Amazon changed their date format slightly, but that was an easy fix.
I've been thinking a lot about the party last night and am really going to have to start working on the visualize technique Ron suggested.
Dad and I went over some potential pen holder kit ideas and I think we have a good handle on what we want to do. |
Puedo apreciar la matemática de la derivación, pero ¿alguien puede explicar esto de una forma más intuitiva sentido?
Me vienen a menudo a través de la creencia errónea de que, debido a replicar la cartera de ser mucho más baja de los contratos de boca de los contratos, la varianza de intercambio es mucho sesgo. Pero esto es incorrecto, estas ponderaciones son sólo para asegurarse de que el $vega de exposición es igual tanto a la baja como al alza.
Es algo para hacer con volga, es decir, la vega de la mayor vol desventaja contratos aumentará más rápido (debido a un aumento de sesgo) de la vega de la parte inferior vol contratos disminuirá?
Puedo apreciar la matemática de la derivación, pero ¿alguien puede explicar esto de una forma más intuitiva sentido?
Respuestas
¿Demasiados anuncios?
Como he mencionado en un comentario, sería erróneo pensar que entrar en un swap de varianza específicamente los montos a ser "largo inclinación".
Lo que usted puede decir, sin embargo, es que, en ausencia de saltos (es decir, en una difusión pura marco, ver aquí y aquí para más info), la feria de la varianza de la huelga de $K_{var}$ en el que una variación de intercambio con nocional $N$ y la rentabilidad
$$ N \times ( \sigma^2_{\text{di cuenta}}(0,T) - K_{var} ) $$
debe el comercio en la siguiente tasa nominal (o la varianza de la huelga)
$$ K_{var} = \frac{2}{B(0,T)T} \left[ \int_0^{F(0,T)} \frac{P(K,T)}{K^2} dK + \int_{F(0,T)}^\infty \frac{C(K,T)}{K^2} dK \derecho] $$donde $T$ cifras del contrato fecha de vencimiento, $\sigma^2_{\text{di cuenta}}(0,T)$ la varianza de registro-los rendimientos que se va a realizar en el horizonte de $[0,T]$, $B(0,T)$ el factor de descuento, $P(K,T)$ y $C(K,T)$ opción Europea de los precios con la huelga de $K$ y madurez $T$ y $F(0,T)$ el precio a futuro.
Por lo tanto, el precio de una variación de intercambio es simplemente una escala integral de la OTMF precio de la curva:$$ K_{var} \propto \int_0^\infty \frac{V(K,T)}{K^2} dK $$$$ V(K,T) = \begin{casos} P(K,T) & \text{si } K < F(0,T) \\ C(K,T) & \text{en caso contrario} \end{casos} $$
Ahora, supongamos la siguiente situación donde, $S_0=100$, $r=q=0$ (no neutrales al riesgo de deriva), $T=1$, junto con 3 formas de volatilidad implícita sonrisa en $T$: plana, puro sesgo, puro convexidad. Si calcular la feria de la varianza de la huelga de $K_{var}$ en diferentes configuraciones, usted verá que tanto asimetría negativa y positiva de la convexidad tener un impacto positivo y no específicamente sesgar como parecen indicar. Ver las simulaciones por debajo de donde he expresado la "variación de precios" como $\sqrt{K_{var}}\times 100$ de manera similar a lo que se hace por la volatilidad de los índices, como el índice VIX.
Si usted toma Quantuple las cosas un poco más, usted puede realmente ver si va a largo de sesgo. Puede ver fácilmente que la dependencia de la convexidad demasiado (aunque debería ser obvio que va a largo convexidad).
Así, en primer lugar, necesitamos algunos sonrisa parametrisation que nos permite controlar fácilmente la convexidad y el sesgo. Me fui con un uno;
$$\mathrm{convexidad} = \mathrm{C} = \left. \frac{\partial^2 \sigma}{\partial K^2} \right|_{K=F} \\\mathrm{sesgar} = \mathrm{S} = \left. \frac{\partial \sigma}{\partial K} \derecho|_{K=F} \\\sigma_{\mathrm{atm}} = \sigma(F)$$
lo que da:
$$\frac{1}{2} C (K-F)^2 + S(K-F) + \sigma_\mathrm{atm}$$
*Tenga en cuenta que entiendo que este no es un buen sonrisa, estoy usando como un simple ejemplo.
Entonces, si usted mira una bonita gama extrema de las sonrisas;
$$-0.001 \leqslant \mathrm{S} \leqslant 0.001\\0 \leqslant \mathrm{C} \leqslant 0.0001\\\sigma_\mathrm{atm} = 20\%$$
tienes un montón de imágenes como Quantuple en la otra respuesta:
Pero, que podemos hacer mejor:
Así que, sí, son mucho sesgo. Pero sólo una muy pequeña cantidad - es mucho más que la convexidad.
Aquí está el código de python para esto si quieres tener un lío con ella.
import numpy as np
def CND(X):
a1,a2,a3,a4,a5 = 0.31938153, -0.356563782, 1.781477937, -1.821255978, 1.330274429
L = np.abs(X)
K = 1.0 / (1.0 + 0.2316419 * L)
w = 1.0 - 1.0 / np.sqrt(2*np.pi)*np.exp(-L*L/2.) * (a1*K + a2*K*K + a3*np.power(K,3) + a4*np.power(K,4) + a5*np.power(K,5))
if X<0:
w = 1.0-w
return w
def BlackSholes(cp,S,X,T,r,v):
d1 = (np.log(S/X)+(r+v*v/2.)*T)/(v*np.sqrt(T))
d2 = d1-v*np.sqrt(T)
if cp=='c':
return S*CND(d1)-X*np.exp(-r*T)*CND(d2)
else:
return X*np.exp(-r*T)*CND(-d2)-S*CND(-d1)
def C(S,X,T,r,v):
return BlackSholes("c", S, X, T, r, v)
def P(S,X,T,r,v):
return BlackSholes("p", S, X, T, r, v)
def B(r,t):
return np.exp(-r*t)
def vol(k, vol_atm, convexity, skew, atm=100, max_vol=1):
v = 0.5*convexity*k**2 + (skew - convexity*atm)*k + vol_atm + 0.5*convexity*atm**2 - skew*atm
return max(1e-5,min(v, max_vol))
import scipy.integrate as integrate
import scipy.special as special
def var_swap(S,T,r,atm_vol, convexity, skew):
F = S/B(r,T)
return np.sqrt((2 / (T*B(r,T))) * (integrate.quad(lambda k: P(S, k, T, r, vol(k, atm_vol, convexity, skew, atm=F)) * k**-2, 0, F)[0] + integrate.quad(lambda k: C(S, k, T, r, vol(k, atm_vol, convexity, skew, atm=F)) * k**-2, F, F*5)[0]))
r = 0.0
T = 1.0
S = 100.0
F = S/B(r,T)
print F
atm_vol = 0.2
convexity = 0.0001
skew = 0.001
ks = [k for k in range(1, int(F*2))]
n_scenarios = 20
skews = np.linspace(-skew, skew, n_scenarios)
convexities = np.linspace(0, convexity, n_scenarios)
plot_smiles = False
if plot_smiles:
import colorsys
blues = [colorsys.hsv_to_rgb(h, 1, 1) for h in np.linspace(0.5, 0.65, n_scenarios)]
reds = [colorsys.hsv_to_rgb(h, 1, 1) for h in np.linspace(0.0, 0.15, n_scenarios)]
from matplotlib import pyplot
fig = pyplot.figure()
ax_smiles = fig.add_subplot(1,1,1)
ax_opts = ax_smiles.twinx()
for i, (convexity, skew) in enumerate(zip(convexities, skews)):
vols = [vol(k, atm_vol, convexity, skew, atm=F) for k in ks]
opts = [BlackSholes("p" if k < F else "c", S, k, T, r, vol(k, atm_vol, convexity, skew, atm=F)) * k**-2 for k in ks]
ax_smiles.plot(ks, vols, color=blues[i])
ax_opts.plot(ks, opts, color=reds[i])
pyplot.show()
else:
CC = np.linspace(0, convexity,n_scenarios)
SS = np.linspace(-skew, skew,n_scenarios)
CC, SS = np.meshgrid(CC, SS)
VV = np.empty(CC.shape)
for i in range(CC.shape[0]):
for j in range(CC.shape[1]):
VV[i,j] = var_swap(S, T, r, atm_vol, CC[i,j], SS[i,j])
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(CC, SS, VV, rstride=1, cstride=1, cmap=cm.jet, linewidth=0, antialiased=True)
ax.set_xlabel("Convexity")
ax.set_ylabel("Skew")
ax.set_zlabel("Var. Swap par rate")
ax.set_ylim(ax.get_ylim()[::-1])
plt.show()
Hay un problema aquí, sin embargo - este es el valor teórico de una variación de intercambio. El mercado no es el comercio de estas de acuerdo con el valor teórico, hay una diferencia que no he visto una forma de contabilizar todavía. La solución a esto es que puede almacenar una tabla de varswap la par de los diferenciales de tasas que pueden ser interpelados y se aplica a varswaps en las correspondientes fechas de inicio y fin.
Esta difusión no viene de la estocástico vol, me parece ser una especie de seguro contra el desagradablemente grandes costos cuando algo sucede. La alternativa es el comercio de corredor de swaps de varianza (es decir, la varianza sólo se devenga cuando el índice está dentro de un corredor), para limitar la probabilidad de este inconveniente.
Usted puede conseguir fácilmente sesgar la exposición con oficios como el de arriba, aunque, si la varianza es sólo acumular cuando el subyacente está por encima/por debajo de un cierto nivel, entonces usted va a ser largo/corto de sesgo cuando el subyacente es cerca de la barrera - porque cuando sólo estás buscando en uno de los lados de un punto, el sesgo y la convexidad tener efectos similares.
Además de las respuestas ya dadas, otra forma de ver esto en el contexto de un modelo de volatilidad estocástica es la siguiente:
El sesgo está fuertemente influenciado por la correlación entre el lugar y la volatilidad. Sin embargo, el precio de un derivado de la volatilidad no depende del parámetro de correlación. Por lo tanto, una variación de intercambio no es largo (o corto) skew / independiente del parámetro de correlación, como es la volatilidad de swap y otros pura volatilidad de los derivados.
Ella (la varianza de la huelga y vol derivados en general) no depende de la convexidad a pesar de (que es determinado por el vol de vol). |
There are multiple changes in TensorFlow 2.0 to make TensorFlow users more productive. TensorFlow 2.0 removes redundant APIs, makes APIs more consistent (Unified RNNs, Unified Optimizers), and better integrates with the Python runtime with Eager execution.
Many RFCs have explained the changes that have gone into making TensorFlow 2.0. This guide presents a vision for what development in TensorFlow 2.0 should look like. It's assumed you have some familiarity with TensorFlow 1.x.
A brief summary of major changes
API Cleanup
Many APIs are eithergone or movedin TF 2.0. Some of the major changes include removing tf.app, tf.flags, andtf.logging in favor of the now open-sourceabsl-py, rehoming projects that lived intf.contrib, and cleaning up the main tf.* namespace by moving lesser usedfunctions into subpackages like tf.math. Some APIs have been replaced withtheir 2.0 equivalents - tf.summary, tf.keras.metrics, andtf.keras.optimizers. The easiest way to automatically apply these renamesis to use the v2 upgrade script.
Eager execution
TensorFlow 1.X requires users to manually stitch together anabstract syntax tree (thegraph) by making tf.* API calls. It then requires users to manually compilethe abstract syntax tree by passing a set of output tensors and input tensors toa session.run() call. TensorFlow 2.0 executes eagerly (like Python normallydoes) and in 2.0, graphs and sessions should feel like implementation details.
One notable byproduct of eager execution is that tf.control_dependencies() isno longer required, as all lines of code execute in order (within atf.function, code with side effects execute in the order written).
No more globals
TensorFlow 1.X relied heavily on implicitly global namespaces. When you calledtf.Variable(), it would be put into the default graph, and it would remainthere, even if you lost track of the Python variable pointing to it. You couldthen recover that tf.Variable, but only if you knew the name that it had beencreated with. This was difficult to do if you were not in control of thevariable's creation. As a result, all sorts of mechanisms proliferated toattempt to help users find their variables again, and for frameworks to finduser-created variables: Variable scopes, global collections, helper methods liketf.get_global_step(), tf.global_variables_initializer(), optimizersimplicitly computing gradients over all trainable variables, and so on.TensorFlow 2.0 eliminates all of these mechanisms(Variables 2.0 RFC) in favorof the default mechanism: Keep track of your variables! If you lose track of atf.Variable, it gets garbage collected.
The requirement to track variables creates some extra work for the user, but with Keras objects (see below), the burden is minimized.
Functions, not sessions
A session.run() call is almost like a function call: You specify the inputsand the function to be called, and you get back a set of outputs. In TensorFlow2.0, you can decorate a Python function using tf.function() to mark it for JITcompilation so that TensorFlow runs it as a single graph(Functions 2.0 RFC). Thismechanism allows TensorFlow 2.0 to gain all of the benefits of graph mode:
Performance: The function can be optimized (node pruning, kernel fusion, etc.)
Portability: The function can be exported/reimported (SavedModel 2.0 RFC), allowing users to reuse and share modular TensorFlow functions.
# TensorFlow 1.X
outputs = session.run(f(placeholder), feed_dict={placeholder: input})
# TensorFlow 2.0
outputs = f(input)
With the power to freely intersperse Python and TensorFlow code, users can take advantage of Python's expressiveness. But portableTensorFlow executes in contexts without a Python interpreter, such as mobile, C++, andJavaScript. To help users avoid having to rewrite their code when adding @tf.function,AutoGraph converts a subset ofPython constructs into their TensorFlow equivalents:
for/while->tf.while_loop(breakandcontinueare supported)
if->tf.cond
for _ in dataset->dataset.reduce
AutoGraph supports arbitrary nestings of control flow, which makes it possible to performantly and concisely implement many complex ML programs such as sequence models, reinforcement learning, custom training loops, and more.
Recommendations for idiomatic TensorFlow 2.0
Refactor your code into smaller functions
A common usage pattern in TensorFlow 1.X was the "kitchen sink" strategy, wherethe union of all possible computations was preemptively laid out, and thenselected tensors were evaluated via session.run(). In TensorFlow 2.0, usersshould refactor their code into smaller functions that are called as needed. Ingeneral, it's not necessary to decorate each of these smaller functions withtf.function; only use tf.function to decorate high-level computations - forexample, one step of training or the forward pass of your model.
Use Keras layers and models to manage variables
Keras models and layers offer the convenient variables andtrainable_variables properties, which recursively gather up all dependentvariables. This makes it easy to manage variables locally to where they arebeing used.
Contrast:
def dense(x, W, b):
return tf.nn.sigmoid(tf.matmul(x, W) + b)
@tf.function
def multilayer_perceptron(x, w0, b0, w1, b1, w2, b2 ...):
x = dense(x, w0, b0)
x = dense(x, w1, b1)
x = dense(x, w2, b2)
...
# You still have to manage w_i and b_i, and their shapes are defined far away from the code.
with the Keras version:
# Each layer can be called, with a signature equivalent to linear(x)
layers = [tf.keras.layers.Dense(hidden_size, activation=tf.nn.sigmoid) for _ in range(n)]
perceptron = tf.keras.Sequential(layers)
# layers[3].trainable_variables => returns [w3, b3]
# perceptron.trainable_variables => returns [w0, b0, ...]
Keras layers/models inherit from tf.train.Checkpointable and are integratedwith @tf.function, which makes it possible to directly checkpoint or exportSavedModels from Keras objects. You do not necessarily have to use Keras's.fit() API to take advantage of these integrations.
Here's a transfer learning example that demonstrates how Keras makes it easy to collect a subset of relevant variables. Let's say you're training a multi-headed model with a shared trunk:
trunk = tf.keras.Sequential([...])
head1 = tf.keras.Sequential([...])
head2 = tf.keras.Sequential([...])
path1 = tf.keras.Sequential([trunk, head1])
path2 = tf.keras.Sequential([trunk, head2])
# Train on primary dataset
for x, y in main_dataset:
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
prediction = path1(x, training=True)
loss = loss_fn_head1(prediction, y)
# Simultaneously optimize trunk and head1 weights.
gradients = tape.gradient(loss, path1.trainable_variables)
optimizer.apply_gradients(zip(gradients, path1.trainable_variables))
# Fine-tune second head, reusing the trunk
for x, y in small_dataset:
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
prediction = path2(x, training=True)
loss = loss_fn_head2(prediction, y)
# Only optimize head2 weights, not trunk weights
gradients = tape.gradient(loss, head2.trainable_variables)
optimizer.apply_gradients(zip(gradients, head2.trainable_variables))
# You can publish just the trunk computation for other people to reuse.
tf.saved_model.save(trunk, output_path)
Combine tf.data.Datasets and @tf.function
When iterating over training data that fits in memory, feel free to use regularPython iteration. Otherwise, tf.data.Dataset is the best way to streamtraining data from disk. Datasets areiterables (not iterators),and work just like other Python iterables in Eager mode. You can fully utilizedataset async prefetching/streaming features by wrapping your code intf.function(), which replaces Python iteration with the equivalent graphoperations using AutoGraph.
@tf.function
def train(model, dataset, optimizer):
for x, y in dataset:
with tf.GradientTape() as tape:
# training=True is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
prediction = model(x, training=True)
loss = loss_fn(prediction, y)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
If you use the Keras .fit() API, you won't have to worry about datasetiteration.
model.compile(optimizer=optimizer, loss=loss_fn)
model.fit(dataset)
Take advantage of AutoGraph with Python control flow
One common place where data-dependent control flow appears is in sequencemodels. tf.keras.layers.RNN wraps an RNN cell, allowing you to eitherstatically or dynamically unroll the recurrence. For demonstration's sake, youcould reimplement dynamic unroll as follows:
class DynamicRNN(tf.keras.Model):
def __init__(self, rnn_cell):
super(DynamicRNN, self).__init__(self)
self.cell = rnn_cell
def call(self, input_data):
# [batch, time, features] -> [time, batch, features]
input_data = tf.transpose(input_data, [1, 0, 2])
outputs = tf.TensorArray(tf.float32, input_data.shape[0])
state = self.cell.zero_state(input_data.shape[1], dtype=tf.float32)
for i in tf.range(input_data.shape[0]):
output, state = self.cell(input_data[i], state)
outputs = outputs.write(i, output)
return tf.transpose(outputs.stack(), [1, 0, 2]), state
For a more detailed overview of AutoGraph's features, see the guide.
tf.metrics aggregates data and tf.summary logs them
To log summaries, use tf.summary.(scalar|histogram|...) and redirect it to awriter using a context manager. (If you omit the context manager, nothinghappens.) Unlike TF 1.x, the summaries are emitted directly to the writer; thereis no separate "merge" op and no separate add_summary() call, which means thatthe step value must be provided at the callsite.
summary_writer = tf.summary.create_file_writer('/tmp/summaries')
with summary_writer.as_default():
tf.summary.scalar('loss', 0.1, step=42)
To aggregate data before logging them as summaries, use tf.metrics. Metricsare stateful: They accumulate values and return a cumulative result when youcall .result(). Clear accumulated values with .reset_states().
def train(model, optimizer, dataset, log_freq=10):
avg_loss = tf.keras.metrics.Mean(name='loss', dtype=tf.float32)
for images, labels in dataset:
loss = train_step(model, optimizer, images, labels)
avg_loss.update_state(loss)
if tf.equal(optimizer.iterations % log_freq, 0):
tf.summary.scalar('loss', avg_loss.result(), step=optimizer.iterations)
avg_loss.reset_states()
def test(model, test_x, test_y, step_num):
# training=False is only needed if there are layers with different
# behavior during training versus inference (e.g. Dropout).
loss = loss_fn(model(test_x, training=False), test_y)
tf.summary.scalar('loss', loss, step=step_num)
train_summary_writer = tf.summary.create_file_writer('/tmp/summaries/train')
test_summary_writer = tf.summary.create_file_writer('/tmp/summaries/test')
with train_summary_writer.as_default():
train(model, optimizer, dataset)
with test_summary_writer.as_default():
test(model, test_x, test_y, optimizer.iterations)
Visualize the generated summaries by pointing TensorBoard at the summary log directory:
tensorboard --logdir /tmp/summaries
Use tf.config.experimental_run_functions_eagerly() when debugging
In TensorFlow 2.0, Eager execution lets you run the code step-by-step to inspectshapes, data types and values. Certain APIs, like tf.function, tf.keras,etc. are designed to use Graph execution, for performance and portability.When debugging, use tf.config.experimental_run_functions_eagerly(True) touse Eager execution inside this code.
For example:
@tf.function
def f(x):
if x > 0:
import pdb
pdb.set_trace()
x = x + 1
return x
tf.config.experimental_run_functions_eagerly(True)
f(tf.constant(1))
>>> f()
-> x = x + 1
(Pdb) l
6 @tf.function
7 def f(x):
8 if x > 0:
9 import pdb
10 pdb.set_trace()
11 -> x = x + 1
12 return x
13
14 tf.config.experimental_run_functions_eagerly(True)
15 f(tf.constant(1))
[EOF]
This also works inside Keras models and other APIs that support Eager execution:
class CustomModel(tf.keras.models.Model):
@tf.function
def call(self, input_data):
if tf.reduce_mean(input_data) > 0:
return input_data
else:
import pdb
pdb.set_trace()
return input_data // 2
tf.config.experimental_run_functions_eagerly(True)
model = CustomModel()
model(tf.constant([-2, -4]))
>>> call()
-> return input_data // 2
(Pdb) l
10 if tf.reduce_mean(input_data) > 0:
11 return input_data
12 else:
13 import pdb
14 pdb.set_trace()
15 -> return input_data // 2
16
17
18 tf.config.experimental_run_functions_eagerly(True)
19 model = CustomModel()
20 model(tf.constant([-2, -4]))
|
Получение порядкового номера QTreeView, с родителем и без
2 В
В В В
Получение порядкового номера QTreeView, с родителем и без
poluna
16.12.2015, 12:36
Сообщение #11
Студент
Группа: Участник
Сообщений: 27
Регистрация: 5.10.2015
Пользователь №: 4458
Спасибо сказали: 1 раз(а)
Репутация:
0В В
Теперь я поняла тебя.
Вариант неплох, и вроде как даже проще в реализации.
Всё, вопрос пока снимаю. Знаю как реализовывать!
lanz
16.12.2015, 12:44
Сообщение #12
Старейший участник
Группа: Участник
Сообщений: 690
Регистрация: 28.12.2012
Пользователь №: 3660
Спасибо сказали: 113 раз(а)
Репутация:
8В В
Цитата
"внемодельное" дерево
Ойойой, не слушайте его, он вас плохому научит!
По идее модель в комбобоксе и модель в дереве слева - это должна быть одна и та же модель.
Поэтому индекс от одного должен подходить к индексу от другого.
Т.е. вы сначала получаете выделенный индекс от дерева
(назовем его idx)
потом в комбобоксе делаете
combo->setRootModelIndex(idx.parent())
combo->setCurrentIndex(idx.row())
http://doc.qt.io/qt-4.8/qcombobox.html#setRootModelIndex
http://doc.qt.io/qt-4.8/qcombobox.html#currentIndex-prop
Алексей1153
16.12.2015, 13:00
Сообщение #13
фрилансер
Группа: Участник
Сообщений: 2906
Регистрация: 19.6.2010
�з: Обливион
Пользователь №: 1822
Спасибо сказали: 215 раз(а)
Репутация:
34  lanz, да можно и в модели хранить, но я так не люблю делать, это же неудобно ))lanz, оно, вообще говоря, так и происходит - противоречий нету, но некоторые операции по своему контейнеру удобнее производить
А в данном случае твой вариант лучше будет, конечно )
Сообщение отредактировал
Алексей1153 - 16.12.2015, 13:08
ViGOur
16.12.2015, 13:27
Сообщение #14
Мастер
Группа: Модератор
Сообщений: 3293
Регистрация: 9.10.2007
�з: Москва
Пользователь №: 4
Спасибо сказали: 231 раз(а)
Репутация:
40В В
А по моему
Алексей1153 предложил хороший способ, я сам подобным же пользуюсь.
Есть список (QList) или дерево(QMap), которое откуда-то загружается и которое отображается в модели. Очень удобно добавлять, редактировать, удалять. А модель это же абстракция и она не должна по идее хранить данные, как и вид.
lanz, выше сказано
Есть у меня класс TreeComboBox, как можно понять из названия в QComboBox у меня находится QTreeView.
и как я понимаю твой метод идеально подходит для QComboBox, но не для переопределенного класса. Дождемся автора, что она скажет!
poluna
16.12.2015, 13:34
Сообщение #15
Студент
Группа: Участник
Сообщений: 27
Регистрация: 5.10.2015
Пользователь №: 4458
Спасибо сказали: 1 раз(а)
Репутация:
0  lanz, если в комбобоксе стандартными средствами можно показать дерево, то твой метод подойдет, но я не смогла.
Как я поняла для показа дерева в комбобокс нужно переопределять класс, я сделала так:
#! /usr/bin/python
# -*- coding: UTF-8 -*-
from PyQt4 import QtCore, QtGui
class TreeComboBox(QtGui.QComboBox):
def __init__(self, parent=None):
super(QtGui.QComboBox, self).__init__(parent)
self._skipNextHide = False
self._treeView = QtGui.QTreeView(self)
self.setView(self._treeView)
self._treeView.header().hide()
self._treeView.viewport().installEventFilter(self)
def eventFilter( self, object, event):
if event.type() == QtCore.QEvent.MouseButtonPress and object == self.view().viewport():
index = self.view().indexAt(event.pos())
if not self.view().visualRect(index).contains(event.pos()):
self._skipNextHide = True
return False
def showPopup(self):
self.setRootModelIndex(QtCore.QModelIndex())
self._treeView.expandAll()
QtGui.QComboBox.showPopup(self)
def hidePopup(self):
self.setRootModelIndex(self.view().currentIndex().parent())
self.setCurrentIndex(self.view().currentIndex().row())
if self._skipNextHide:
self._skipNextHide = False
else:
QtGui.QComboBox.hidePopup(self)
если я не права, то буду только рада, сразу куча проблем исчезнет!
Но пока не знаю как!
lanz
16.12.2015, 14:53
Сообщение #16
Старейший участник
Группа: Участник
Сообщений: 690
Регистрация: 28.12.2012
Пользователь №: 3660
Спасибо сказали: 113 раз(а)
Репутация:
8  poluna, ну у меня ваш код вроде работает как надо, ЧЯДНТ?
Немножко поменял hidePopup, чтобы он сразу все не корячил:
def hidePopup(self):
if self._skipNextHide:
self._skipNextHide = False
else:
self.setRootModelIndex(self.view().currentIndex().parent())
self.setCurrentIndex(self.view().currentIndex().row())
QtGui.QComboBox.hidePopup(self)
poluna
16.12.2015, 16:06
Сообщение #17
Студент
Группа: Участник
Сообщений: 27
Регистрация: 5.10.2015
Пользователь №: 4458
Спасибо сказали: 1 раз(а)
Репутация:
0В В
Все, поняла, все работает!
Выкладываю работающий пример, так же на python:
пример
#! /usr/bin/python
# -*- coding: UTF-8 -*-
import sys
from PyQt4 import QtCore, QtGui
class TreeComboBox(QtGui.QComboBox):
def __init__(self, parent=None):
super(QtGui.QComboBox, self).__init__(parent)
self._skipNextHide = False
self._treeView = QtGui.QTreeView(self)
self.setView(self._treeView)
self._treeView.header().hide()
self._treeView.viewport().installEventFilter(self)
def eventFilter( self, object, event):
if event.type() == QtCore.QEvent.MouseButtonPress and object == self.view().viewport():
index = self.view().indexAt(event.pos())
if not self.view().visualRect(index).contains(event.pos()):
self._skipNextHide = True
return False
def showPopup(self):
self.setRootModelIndex(QtCore.QModelIndex())
self._treeView.expandAll()
QtGui.QComboBox.showPopup(self)
def hidePopup(self):
if self._skipNextHide:
self._skipNextHide = False
else:
self.setRootModelIndex(self.view().currentIndex().parent())
self.setCurrentIndex(self.view().currentIndex().row())
QtGui.QComboBox.hidePopup(self)
class Main(QtGui.QWidget):
def __init__(self, parent=None):
QtGui.QWidget.__init__(self, parent)
self._comboBox = TreeComboBox(self)
self._treeView = QtGui.QTreeView(self)
layout = QtGui.QVBoxLayout()
layout.addWidget(self._comboBox)
layout.addWidget(self._treeView)
self.setLayout(layout)
model = QtGui.QStandardItemModel()
for a in range(3):
i = QtGui.QStandardItem('Item ' + str(a))
for b in range(3):
ii = QtGui.QStandardItem('sub 1 Item ' + str(b))
i.setChild(b, ii)
for c in range(3):
iii = QtGui.QStandardItem('sub 2 Item ' + str(c))
ii.setChild(c, iii)
model.appendRow(i)
self._comboBox.setModel(model)
self._treeView.setModel(model)
self.connect(self._treeView, QtCore.SIGNAL("clicked(const QModelIndex&)"), self.comboSelect)
def comboSelect(self, idx):
self._comboBox.setRootModelIndex(idx.parent())
self._comboBox.setCurrentIndex(idx.row())
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
main = Main()
main.show()
sys.exit(app.exec_())
lanz, спасибо огромное!
2
В В В
Текстовая версия Сейчас: 22.1.2021, 12:00 |
Nous avons déjà utilisé un Arduino pour contrôler une cafetière avec des boutons, une télécommande infrarouge, un smartphone Android et même un Minitel. Maintenant, il est temps d’utiliser le MicroBit.
Matériel nécessaire
La cafetière modifiée et 1 module d’au moins 3 relais
Un MicroBit et son alimentation de 3V,
Une alimentation de 5V pour le module relais (piles ou transformateur)
Des connecteurs MicroBit – Dupont
Des câbles Dupont (et éventuellement une breadboard)
Schéma de branchement
Le schéma ressemble beaucoup à celui de l’Arduino. Mais pour faciliter les branchements, j’ai décidé d’utiliser seulement les 3 sorties principales du Micro:bit (P0, P1 et P2). L’alimentation électrique de la cafetière ne sera donc pas gérée par le module relais.
Attention : le Microbit fonctionne en 3,3V alors que le module relais demande une alimentation de 5V pour fonctionner. Vous devez donc utiliser 2 alimentations séparées. Ça ne pose pas vraiment de soucis à condition de relier ensembles les bornes (-), comme sur le schéma.
Vous pouvez aussi utiliser ce type de module qui se connecte directement à la breadboard et peut fournir simultanément 2 tensions différentes (3,3 et 5V), en déplaçant les cavaliers.
Le schéma légèrement différent.
Code
Le programme est très simple à utiliser. Vous mettez tous les éléments sous tension et vous appuyez sur le bouton A pour faire un café simple ou sur le bouton B si vous souhaitez un café double.
# Appel de la bibliothèque "microbit"
from microbit import *
# Les boutons ne sont pas appuyes
pin0.write_digital(1)
pin1.write_digital(1)
pin2.write_digital(1)
# Définition des temps de chauffe (en sec)
# Ajustez ces valeurs suivant votre cafetiere
CHAUFF_CAFFETIERE = 90
SIMPLE_TASSE = 70
DOUBLE_TASSE = 100
# Fonctions
def SimpleTasse():
# Affichage de la taille de la tasse
display.show("1")
# Allumage de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
# Chauffage de l'eau dans la cuve
sleep(CHAUFF_CAFFETIERE * 1000)
# Simple tasse (appui et relachement du bouton)
pin1.write_digital(0)
sleep(500)
pin1.write_digital(1)
# Préparation du cafe
sleep(SIMPLE_TASSE * 1000)
# Extinction de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
display.clear()
def DoubleTasse():
# Affichage de la taille de la tasse
display.show("2")
# Allumage de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
# Chauffage de l'eau dans la cuve
sleep(CHAUFF_CAFFETIERE * 1000)
# Double tasse (appui et relachement du bouton)
pin2.write_digital(0)
sleep(500)
pin2.write_digital(1)
# Préparation du cafe
sleep(DOUBLE_TASSE * 1000)
# Extinction de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
display.clear()
# Boucle infinie
while True:
if button_a.was_pressed():
# Café simple
SimpleTasse()
elif button_b.was_pressed():
# Café double
DoubleTasse()
Si vous possédez un autre Microbit, vous pouvez l’utiliser pour télécommander la cafetière à distance.
Télécommande
Le programme est très court, il envoie simplement le message « 1 » ou «
» pour indiquer que l’on a appuyé sur le bouton A ou sur le bouton B.2
# Appel de la bibliothèque "microbit"
from microbit import *
import radio
radio.config(group=1)
radio.on()
# Boucle infinie
while True:
if button_a.was_pressed():
# Café simple
radio.send('1')
elif button_b.was_pressed():
# Café double
radio.send('2')
Contrôle de la cafetière
Le code est presque identique à la 1ère version. Les boutons fonctionnent toujours, mais si le récepteur reçoit le message « 1 » ou «
« , il réagit comme si vous aviez appuyé sur l’un des boutons.2
# Appel de la bibliothèque "microbit"
from microbit import *
import radio
# Les boutons ne sont pas appuyes
pin0.write_digital(1)
pin1.write_digital(1)
pin2.write_digital(1)
# Définition des temps de chauffe (en sec)
# Ajustez ces valeurs suivant votre cafetiere
CHAUFF_CAFFETIERE = 90
SIMPLE_TASSE = 70
DOUBLE_TASSE = 100
# Fonctions
def SimpleTasse():
# Affichage de la taille de la tasse
display.show("1")
# Allumage de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
# Chauffage de l'eau dans la cuve
sleep(CHAUFF_CAFFETIERE * 1000)
# Simple tasse (appui et relachement du bouton)
pin1.write_digital(0)
sleep(500)
pin1.write_digital(1)
# Préparation du cafe
sleep(SIMPLE_TASSE * 1000)
# Extinction de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
display.clear()
def DoubleTasse():
# Affichage de la taille de la tasse
display.show("2")
# Allumage de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
# Chauffage de l'eau dans la cuve
sleep(CHAUFF_CAFFETIERE * 1000)
# Double tasse (appui et relachement du bouton)
pin2.write_digital(0)
sleep(500)
pin2.write_digital(1)
# Préparation du cafe
sleep(DOUBLE_TASSE * 1000)
# Extinction de la cafetiere
# (appui et relachement du bouton)
pin0.write_digital(0)
sleep(500)
pin0.write_digital(1)
display.clear()
# Programme
radio.config(group=1)
radio.on()
# Boucle infinie
while True:
# Réception des messages
message = radio.receive()
# interpretation du message
if message == '1':
# Café simple
SimpleTasse()
elif message == '2':
# Café double
DoubleTasse()
# Boutons
if button_a.was_pressed():
# Café simple
SimpleTasse()
elif button_b.was_pressed():
# Café double
DoubleTasse()
|
Вот уже три месяца как работаю на Макбуке Про. До этого сидел на Убунте/Минте, еще раньше – на Виндах. Покупал Мак скорее ради эксперимента: попробовать новую операционку, окружение. В целом, ощущения скорее положительные, хотя и не без претензий. Итак, ниже – мой список плюсов и минусов от использования яблочного ноутбука.
Эргономика и дизайн действительно на высоте. Прочный алюминиевый корпус, легкий, компактный, никаких люфтов, прогибов. (Однако, у коллеги скрипит экран при регулировке угла, полагаю, частный случай.)
Читать текст с ретины – удовольствие на уровне физического.
Производительность и стабильность системы в среднем выше аналогов. За три месяца повис только раз, падения программ случаются намного реже, чем в Убунте/Винде
Качество некоторых программ выше аналогов для других ОС, например, того же Скайпа. Причина – гайды для разработчиков.
Без проблем работают все игры Близзарда (я играю в Старкрафт, тянет графику выше средних).
Без проблем устанавливается весь *никсовый стек веб-разработчика – nginx, memcached, Python, Ruby, etc.
Не нужно думать о вирусах и тд, хотя игнорировать основы безопасности не стоит.
Очень живучая батарея, при обычном серфинге и ютубчике хватает часов на 6-7.
Офигительный трекпад: можно реально работать без мыши. Многие программы поддерживают жесты. Очень точный скроллинг и перетаскивание.
Не обошлось без минусов:
Другая раскладка. Мне, привыкшему за 15 лет к работе с PC-клавиатурой, оказалось адски трудно перейти на маковскую раскладку. И похоже, этого не произойдет. Руки привыкли к правому контролу, роль которого выполняет Комманд-кей, клавиша абсолютно лишняя. Видимо, Джобс полагал, что обычному человеку проще нажимать модификационную клавишу правым пальцем, а не мизинцем. Может, оно и так, но руки перетренировать я не в силах – подключаю внешнюю писишную клавиатуру.
Продолжение первого пункта – трудности с переключением раскладки. Решается установкой Панто Свитчера от Яндекса.
Есть огрехи в некоторых программах – например, предпросмотр вордовского документа работает хорошо, вижу текст. Открываю в pages – говорит, нет шрифта Калибри, не вижу текста. Что за идиот писал эту часть?
Неудобный файловый менеджер, который Finder. Идиотизм, что папки и файлы сортируются в одном порядке. В диалоге выбора файлов нельзя сделать никаких операций (например, переименовать файл).
Почти все программы не закрываются, а остаются висеть в памяти, приходится завершать их мышкой (правый клик – Quit).
Вот примерно такой списочек. Несмотря на первичную фрустрацию, покупкой Мака я доволен – хоть и стоит почти в два раза дороже аналогичной по производительности модели, по качеству и стабильности работы обгоняет конкурентов. Остается только привыкнуть к раскладке, и будет все ОК.
Комментарии из старого блога
11/25/14 Женя Волков:У меня был ровно такой же путь на мак, один в один с Убунтой и Минтом =)
В раскладке не нравится, что надо жать Command вместо Ctrl? Предлагаю сравнить эргономичность (посмотреть как раскорячивается рука) нажимая всякие комбинации с обеими этими клавишами. С Command практически не надо тянуться и целиться, клавиша всегда под большим пальцем (если правая рука стоит на asdf, если нет, то это не проблемы раскладки, конечно). Привычки — страшная сила, но это не повод не пользоваться удобным. Это моё субъективное мнение насчет раскладки, но мне кажется, многие его подтвердят.
Сейчас можно почти забыть о файндере, так как по Ctrl+пробел открывается спотлайт, который ищет всё и вся. Диалог выбора файлов не зря так называется =) В нём можно выбирать файлы.
Сортировка файлов небось по алфавиту?
Закрыть окно — Command+W Закрыть программу — Command+Q
И вообще, в маке очень много горячих клавиш и всяких скрытых фич, например Alt+ПКМ где-либо.
11/25/14 Иван Гришаев:Я не спорю, что все дело в привычке, но переучиваться некогда. Сколько себя помню, переключал раскладку по Контрол-Шифт. При разработке у вас куча файлов, спотлайт не поможет. Про сортировку я имел в виду то, что в винде/убунте сперва идут папки, а потом файлы. Я к этому привык и мне кажется это правильным. А диалог все-таки можно подтюнить.11/25/14 Женя Волков:Вопрос был со скрытой подсказкой. Если выбрать сортировку не по алфавиту, а по типу или программе, то папки отделятся от файлов по группам. Хотя я лично советую выбирать сортировку по дате изменения/добавления — так последние использованные файлы остаются вверху, старые опускаются вниз.
Если бы диалог выбора файлов оправдывал другие возможности, помимо выбора файлов, его бы давно «подтюнили». Политика Эпла с начала компании — избавляться от всего лишнего, в этом весь мак. Зато, в диалоге открытия файла можно пробелом посмотреть его по-быстрому, а потом открыть, если нужно. И, например, меня больше пугает, что в винде и линуксе (может не везде уже) в диалоге открытия(!) файла можно создать новую папку.
Я тоже всю жизнь переключал раскладки
Ctrl+Shift(что-то много совпадений в наших историях =). Любая привычка меняется за несколько дней, в случае с постоянным использованием компьютера в работе. А если поощрять себя за новую привычку, то ещё быстрее и безболезненней. А самый большой плюс в том, что приобретая новые привычки мы не забываем тут же старые, в мозгу образуются новые связи, а это всегда полезно.
Скролл тоже инвертировать пришлось обратно как везде?
11/26/14 Иван Гришаев:Сортировка по типу – это не то. Я привык к сортировке по расширению файла и чтобы папки были сверху. Насчет диалога – все равно не понятно, зачем лишать пользователя возможности что-то сделать. Не пойму, кто пострадает из-за того, что я удалил старый файл в окошке диалога. Скрол инвертирован, да. По долгу службы мне еще приходится работать с виртуальными машинами на линуксах, где старые клавиши и комбинации, и я тупо начинаю путаться между маковской схемой и писишной. Поэтому у меня сейчас некий стабилизационный период =)12/08/14 petu:Файлы в Finder можно переименовывать, если на файле (когда выделение находится на файле) нажать Enter :). Неочевидно, но работает.
Цитирую Ленту.ру:
Памятник Стиву Джобсу в виде гигантского iPhone, стоявший в Технопарке Научно-исследовательского университета информационных технологий в Санкт-Петербурге, был демонтирован после заявления главы Apple Тима Кука о своей нетрадиционной сексуальной ориентации, сообщает сайт радиостанции «Бизнес-FM».
Пресс-служба ЗЕФС объясняет демонтаж памятника двумя причинами: исполнением закона о запрете пропаганды гомосексуализма среди несовершеннолетних, а также разоблачениями бывшего сотрудника Агентства национальной безопасности США Эдварда Сноудена, согласно которым продукция Apple передает данные о своих пользователях американским спецслужбам.
Стоит только коснуться темы перевода времени, как сразу находятся те, кто утверждают, что разница с астрономическим временем – это плохо. “Разница с астрономическим временем составит аж два часа! Солнце будет в зените только в 14 часов!” – сокрушается типовой комментатор.
Уважаемые комментаторы, астрономическое время нам нахрен не всралось. Оно – не более чем очережная система измерения. То, что время, удобное человеку, опережает астрономическое на N часов, никого, кроме упоротых особей, не интересует. Так уж сложилось, что человек устанавливает меры и веса под свои нужды. Как нам удобно – так и должно быть.
Ноль градусов по Цельсию – температура замерзания воды. Высасано из пальца, зато удобно, ни с чем не спутаешь. В развернутом угле 180 градусов – столько раз умещается Солнце на небосводе. А дневное время – это когда светло. Все дела легче и безопасней делать при дневном свете. Поэтому время должно быть сдвинуто на столько часов, на сколько нам удобней. Сдвиг на 2 часа как раз подходил – нормально и вечером, и утром. Это единственное, что понимал Медведев, и сделал он все правильно. Теперь стало хуже. Почему – см. предыдущий пост.
Прокремлевская Лента.ру спешит напомнить нам, что текущие выборы в Украине (“на Украине”, на Ленте только так пишут) оказались худшими в истории по явке: 52.42% процента избирателей.
Ай-ай-ай, как все плохо в фашисткой Украине. Другое дело в Прекрасной России: выбирать мэра пришли аж целых 32.03% москвичей пруф.
взрослый человек на голубом глазу утверждает, что нам действительно нужно повышение налогов и санции, чтобы развивать российское ПО (читать с середины комментариев).
Россия вернулись на зимнее время. Вполне ожидаемо в стране, где правящей верхушке плевать на граждан. Теперь утром мы теряем один светлый час, зря его просыпая. А вечером будем проводить в темноте на один час больше. Вести детей из садика будем во мраке. Ученик со второй смены выйдет из школы уже в сумерках.
Словом, как всегда чиновники оказались мудаками.
Комментарии из старого блога
10/27/14 Алексей Дзюба:А я вот с вами не согласен. Прошлой зимой меня достало, когда в Санкт-Петербурге светлеет только после 10 утра.10/27/14 Иван Гришаев:Это ваши личные предпочтения. Я же даю обоснование: основная часть преступлений совершается именно вечером, а не утром. Если раньше ребенок мог вернуться домой до наступления темноты, то теперь он этой возможности лишен. После работы среднестатичный гражданин должен успеть забрать ребенка из сада/школы, зайти в магазин, сесть в общественный транспорт. Все это удобней делать, пока еще не стемнело.10/27/14 Сергей Кондрашев:Было: в Москве астрономическое время отставало на почти 2 часа (солнце в зените не в 12 дня, а в 13:45) Сейчас на час скорректировали. Это явно лучше для сельского хозяйства, где люди больше на световой день сориентированы. А в городе все позже вставали и ложились, поэтому прежнее время больше подходило. Так что все относительно - не все население нашей страны живет в городах.10/27/14 Иван Гришаев:Какая разница, когда солнце в зените? Вы регулярно подходите к окну и проверяете это? Сельское хозяйство явно ниже по приоритету, чем образование и безопасность жителей городов. Что мешает организовать работу деревенских фабрик по своему графику?10/27/14 Igor Schnaider:Сергей Кондрашев, астрономическое время - это такая хуета. Какая нафиг разница, во сколько по времени солнце в зените. Учитывая, что это и так абсолютно искусственно принятое время, а вот вечером идти по темноте - это стремно. Утром без разницы - светло или нет. Все равно на работе сидим.
В Кверифиде теперь нормальный вывод дат. Твиттер с своем АПИ выдает черт знает что:
Mon Sep 24 03:35:21 +0000 2012
Это и ни ИСО, и не RFC822. Приходится парсить и выводить в формате RFC822 вот так:
# app.py
@app.template_filter('rfc822')
def rfc822(str_date):
return DateParser.parse(str_date).strftime(
'%a, %d %b %Y %H:%M:%S %z')
# twitter.xml (jinja2 markup)
<pubDate></pubDate>
И твиты отображаются в верном порядке.
Лично бы расстрелял мудаков-дворников за то, что жгут листья.
Новая фича Кверифида – можно исключить из ленты прямые сообщения (начинающиеся с
@username). Мелочь, а приятно.
Неудобный, перегруженный интерфейс.
Километровые урлы.
По ссылке подтверждения дружбы открывается форма логина.
После авторизации – капча (ебать, 2014 год!)
Форма авторизации вымогает пароль не учтки, а самой электронной почты. |
After long hiatus (because $JOB) I’m trying to find some time to spend on FreeBSD-related projects, looking for small ones that can be done over weekend or a bit more. One of the ideas came from Ed Maste’s twitter: implement FreeBSD support for pyu2f. Since I already spent some time working on FreeBSD U2F support for Chromium it felt like a good small project.
The challenging part of the project was not U2F/HID but interfacing ioctl with Python, something I have never done before. It wasn’t super complex and I learned about Python’s ctype module.
Even more challenging though was to find a code to verify the implementation. Turned out there was no script to run
import os
from pyu2f import u2f
from pyu2f import model
ORIGIN = 'https://kernelnomicon.org'
APP_ID = 'wordpress'
REGISTRATION_DAT = 'registration.dat'
device = u2f.GetLocalU2FInterface(ORIGIN)
# Try to register new app or read saved registration data if it exists
if os.path.exists(REGISTRATION_DAT):
with open(REGISTRATION_DAT, 'rb') as f:
rd = f.read()
else:
r = device.Register(APP_ID, b'ABCD', [])
rd = r.registration_data
with open(REGISTRATION_DAT, 'wb+') as f:
f.write(rd)
# extract public key, key handle length, and key handle
pubkey = bytes(rd[1:66])
# this is for Python3, use ord(rd[66]) for Python2
khl = rd[66]
key_handle = bytes(rd[67:67 + khl])
# Try to authenticate
key = model.RegisteredKey(key_handle)
response = device.Authenticate(APP_ID, b'012345678', [key])
print (response.signature_data)
print (response.client_data)
The final result is in my fork of pyu2f repo, on freebsd branch. |
å¤è¿ç¨æè å¤çº¿ç¨çå¹¶è¡å éç®åå·²ç»ä¸æ¯ä»ä¹é¾äºäºï¼ç¸ä¿¡å¾å¤è¯»è é½ä½éªè¿ãä¸è¬æ¥è¯´ï¼æä»¬ä¼æè¿æ ·çç»è®ºï¼å¤è¿ç¨çå 鿝å¾é¾è¾¾å°1ãæ¢å¥è¯è¯´ï¼å½ä½ ç¨10è¿ç¨å»å¹¶è¡è·ä¸ä¸ªä»»å¡æ¶ï¼ä¸è¬åªè½è·å¾ä¸å°10åçå éï¼èä¸è¿ç¨è¶å¤ï¼è¿ä¸ªå 鿝å¾å¾å°±è¶ä½ã
è¦æ³¨æï¼æä»¬åæè¯´âå¾é¾è¾¾å°1âï¼è¯´ææä»¬çæ½æè¯éå°±è§å¾å 鿝æå¤ä¹å°±æ¯1ãç论ä¸ç¡®å®æ¯çï¼é¾ä¸æç¨10è¿ç¨è¿è½è·å¾20åçå éï¼è¿ä¸æ¯å¤©ä¸æé¦ 饼åï¼ä¸è¿æåå 天确å®ç¢°å°äºä¸ä¸ªå 鿝è¿å¤§äº1çä¾åï¼æä»¥å¨è¿éè·å¤§å®¶å享ä¸ä¸ã
è¯é¢ç»è®¡ #
æçåå§ä»»å¡æ¯ç»è®¡è¯é¢ï¼ææå¾å¤æç« ï¼ç¶åæä»¬è¦å¯¹è¿äºæç« è¿è¡åè¯ï¼æåæ±æ»åºä¸ä¸ªè¯é¢è¡¨åºæ¥ãä¸è¬çåæ³æ¯è¿æ ·çï¼
tokens = {}
for text in read_texts():
for token in tokenize(text):
tokens[token] = tokens.get(token, 0) + 1
è¿ç§åæ³å¨æç»è®¡THUCNewså ¨é¨æç« çè¯é¢æ¶ï¼å¤§æ¦è±äº20åéã
å¤è¿ç¨çæ¬ #
ç¶åï¼æä»¬æ¥æ¯è¾ä¸ä¸å¤è¿ç¨ççãå¤è¿ç¨çåæ³æå¨ãPythonçå¤è¿ç¨ç¼ç¨æå·§ã䏿已ç»ä»ç»è¿äºï¼ä¸ºäºæ¹ä¾¿éå¤ä½¿ç¨ï¼æå°±å°å ¶å°è£ 为ä¸ä¸ªå½æ°äºï¼
def parallel_apply(func,
iterable,
workers,
max_queue_size,
callback=None,
dummy=False):
"""多进程或多线程地将func应用到iterableçš„æ¯ä¸ªå…ƒç´ ä¸ã€‚
注æ„这个apply是异æ¥ä¸”æ— åºçš„ï¼Œä¹Ÿå°±æ˜¯è¯´ä¾æ¬¡è¾“å…¥a,b,c,但是
输出å¯èƒ½æ˜¯func(c), func(a), func(b)。
傿•°ï¼š
dummy: False是多进程/线性,True则是多线程/线性;
callback: 处ç†å•个输出的回调函数;
"""
if dummy:
from multiprocessing.dummy import Pool, Queue
else:
from multiprocessing import Pool, Queue
from six.moves import queue
in_queue, out_queue = Queue(max_queue_size), Queue()
def worker_step(in_queue, out_queue):
# 啿¥å‡½æ•°åŒ…装æˆå¾ªçŽ¯æ‰§è¡Œ
while True:
d = in_queue.get()
r = func(d)
out_queue.put(r)
# å¯åŠ¨å¤šè¿›ç¨‹/线程
pool = Pool(workers, worker_step, (in_queue, out_queue))
if callback is None:
results = []
# åŽå¤„ç†å‡½æ•°
def process_out_queue():
out_count = 0
for _ in range(out_queue.qsize()):
d = out_queue.get()
out_count += 1
if callback is None:
results.append(d)
else:
callback(d)
return out_count
# å˜å…¥æ•°æ®ï¼Œå–出结果
in_count, out_count = 0, 0
for d in iterable:
in_count += 1
while True:
try:
in_queue.put(d, block=False)
break
except queue.Full:
out_count += process_out_queue()
if in_count % max_queue_size == 0:
out_count += process_out_queue()
while out_count != in_count:
out_count += process_out_queue()
pool.terminate()
if callback is None:
return results
è°ç¨è¿ä¸ªå½æ°æ¥å¤è¿ç¨ç»è®¡è¯é¢ï¼å¤§è´ä»£ç å¦ä¸ï¼
def _batch_texts():
texts = []
for text in read_texts():
texts.append(text)
if len(texts) == 1000:
yield texts
texts = []
if texts:
yield texts
def _tokenize_and_count(texts):
tokens = {}
for text in texts:
for token in tokenize(text):
tokens[token] = tokens.get(token, 0) + 1
return tokens
tokens = {}
def _total_count(result):
for k, v in result.items()
tokens[k] = tokens.get(k, 0) + v
# 10进程æ¥å®Œæˆè¯é¢‘统计
parallel_apply(
func=_tokenize_and_count,
iterable=_batch_texts(),
workers=10,
max_queue_size=200,
callback=_total_count,
)
æ´ä¸ªæµç¨æ¯ï¼_batch_textså°ææ¬ææ¹ååï¼æ¯æ¹ä¸º1000ä¸ªææ¬ï¼_tokenize_and_countç¨æ¥å¯¹æ¯ä¸æ¹æ ·æ¬è¿è¡ç»è®¡ï¼_total_count对æ¯ä¸æ¹æ ·æ¬çç»æè¿è¡æ±æ»ï¼æåparallel_applyç¨10è¿ç¨å®ç°è¿ä¸ªè¿ç¨ã
è¿ä¸ªç¨æ¶å¤å°å¢ï¼ç»ææ¯55ç§ï¼è¿æå³çå é20åï¼å 鿝æ¯2ï¼
åçåæ #
为ä»ä¹è½å®ç°å¤§äº1çå 鿝å¢ï¼å
¶å®ï¼åå å¨äºæå¼å§çåè¿ç¨å®ç°ä¸ï¼tokens[token] = tokens.get(token, 0) + 1ä¸å¥ä¼è¶æ¥è¶æ
¢ï¼å 为éçç»è®¡çæ¨è¿ï¼tokenséè¾¹çå
ç´ è¶æ¥è¶å¤ï¼å¯¹tokensçå¢å æ¹æ¥å°±ä¼è¶æ¥è¶æ
¢ã
èå¨å¤è¿ç¨çæ¬ä¸ï¼tokens[token] = tokens.get(token, 0) + 1ä¸å¥åªå¯¹ä¸è¶
è¿1000ä¸ªæ ·æ¬æ§è¡ï¼æ¾ç¶ä¼ä¸ç´ä¿æå¾å¿«çéåº¦ï¼æåçåå¹¶ç»è®¡ç»æè½ç¶å¯¹tokensç读åä¹å¾é¢ç¹ï¼ä½è¿æ¯ä¸ä¸åå§å®ç°ç读åé¢çï¼å æ¤ä¹æ¯å¾å¿«çãæä»¥å¤è¿ç¨çæ¬å°±å¯ä»¥å®ç°20åçå éï¼èä¸ä»
ä»
æ¯ç论ä¸çæé10åã
å½ç¶ï¼è¯»è å¯è½å·²ç»æè§å°ï¼è¿å¹¶ä¸æ¯çæ£å°è®©å éæ¯è¶ è¿äº1ï¼èæ¯åå§çåè¿ç¨çåå¾ä¸å¥½çè¡¨è±¡ï¼æ¢æä¸è¿°ä»£ç 就好äºï¼
count = 0
tokens = {}
_tokens = {}
for text in read_texts():
for token in tokenize(text):
_tokens[token] = _tokens.get(token, 0) + 1
count += 1
if count == 1000:
for k, v in _tokens.items():
tokens[k] = tokens.get(k, 0) + v
count = 0
_tokens = {}
for k, v in _tokens.items():
tokens[k] = tokens.get(k, 0) + v
ä¹å°±è¿æ¯åæ¹ç»è®¡åæ±æ»çåæ³ï¼åªä¸è¿è¿æ¯åè¿ç¨çï¼çä¸å»è¿ç§åæ³å¾è¿åï¼å¾ä¸ç´è§ï¼ä½äºå®ä¸åªç¨äº8åéï¼å¤§çº¦åªæ¯åæ¥çæ¬çä¸åä¹ä¸ï¼ç±æ¤å¯è§ï¼å®é ä¸çå éæ¯å¤§çº¦æ¯0.8ã
æ¬æå°ç» #
ææ¬ç®å讨论äºä¸ä¸Pythonçå¤è¿ç¨é®é¢ï¼ç»åºäºä¸ä¸ªçä¸å»å 鿝å¯ä»¥å¤§äº1çä¾åï¼ç¶ååæäºå ¶åå ãä»ä¾§é¢æ¥çï¼è¿å ¶å®ä¹ç»æä»¬å类似çä»£ç æäºä¸ªéï¼åªæå¨åè¿ç¨çæ åµä¸ï¼åæ¹è®¡ç®ç¶å卿±æ»çæçï¼ä¹é常ä¼é«äºä¸æ´æ¹ä¸æ¬¡æ§è®¡ç®ã
转载å°è¯·å
æ¬æ¬æå°åï¼https://spaces.ac.cn/archives/7031
æ´è¯¦ç»ç转载äºå®è¯·åèï¼ãç§å¦ç©ºé´FAQã
妿æ¨è§å¾æ¬æè¿ä¸éï¼æ¬¢è¿å享/æèµæ¬æãæèµå¹¶éè¦ä»ä¸è·å¾æ¶çï¼èæ¯å¸æç¥éç§å¦ç©ºé´è·å¾äºå¤å°è¯»è
ççå¿å
³æ³¨ãå½ç¶ï¼å¦æä½ æ è§å®ï¼ä¹ä¸ä¼å½±åä½ çé
读ã忬¡è¡¨ç¤ºæ¬¢è¿åæè°¢ï¼
妿æ¨éè¦å¼ç¨æ¬æï¼è¯·åèï¼
èåæ. (Oct. 27, 2019). ãä»ä¹æ¶åå¤è¿ç¨çå 鿝å¯ä»¥å¤§äº1ï¼ ã[Blog post]. Retrieved from https://spaces.ac.cn/archives/7031
ä½ ä¹è®¸è¿å¯¹ä¸é¢çå 容æå ´è¶£
6个派çä¼åå¨çç®åä»ç»åå ¶å®ç°
åºäºæå°çµåççNLPåºï¼nlp zero
ç¨Numpyå®ç°é«æçAprioriç®æ³
å¢å¼ºtypechoçæç´¢åè½
Pythonçå¤è¿ç¨ç¼ç¨æå·§
ãå¤å¿ãPython䏿å¤é循ç¯çå ç§æè·¯
åºäºååGRUåè¯è¨æ¨¡åçè§è§æ æåæ
ä¸é¡¾ç¢çº¸å¤åï¼åºäºCNNçç¢çº¸å¤å
ãçè§£é»æ¼å ä½ã6. æ²çç计æ°ä¸è®¡ç®(Python)
åºäºååLSTMåè¿ç§»å¦ä¹ çseq2seqæ ¸å¿å®ä½è¯å« |
想要观察阿里云上所有实例每周的cpu 平均使用率情况,根据阿里OpenAPI 提供的 " DescribeInstanceMonitorData " api来获取实例所有监控数据,之后再处理数据得到自己想要的指标。
OpenAPI 网址: https://api.aliyun.com
使用 python3
需要安装阿里OpenAPI SDK: python3 -m pip install aliyun-python-sdk-ecs
官方提供的 DescribeInstanceMonitorData python 版代码:
#!/usr/bin/env python
#coding=utf-8
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.acs_exception.exceptions import ClientException
from aliyunsdkcore.acs_exception.exceptions import ServerException
from aliyunsdkecs.request.v20140526.DescribeInstanceMonitorDataRequest import DescribeInstanceMonitorDataRequest
client = AcsClient('<accessKeyId>', '<accessSecret>', 'cn-hangzhou')
request = DescribeInstanceMonitorDataRequest()
request.set_accept_format('json')
request.set_InstanceId("InstanceId")
request.set_StartTime("2020-04-09T00:00:00Z")
request.set_EndTime("2020-04-09T12:00:00Z")
request.set_Period(600)
response = client.do_action_with_exception(request)
# python2: print(response)
print(str(response, encoding='utf-8'))
修改成适合自己使用的:
#!/usr/bin/env python
# coding=utf-8
import os
import json
from datetime import datetime, date
from datetime import timedelta
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.acs_exception.exceptions import ClientException
from aliyunsdkcore.acs_exception.exceptions import ServerException
from aliyunsdkecs.request.v20140526.DescribeInstanceMonitorDataRequest import DescribeInstanceMonitorDataRequest
access_key_id = "" # 填入 accessKeyId, 没有的话去控制台新建
access_secret = "" # 填入 accessSecret
region_id = "" # 填入 RegionId 如:cn-hangzhou
client = AcsClient( access_key_id, access_secret, region_id)
instance_id = [] # 在此填入所有阿里实例ID,如:instance_id = ["i-bp1db2tn3cvfxxxxxxxx", "i-bp1auiw4jkbbxxxxxxxx"] 所有实例ID可由另一个api "DescribeInstances" 获得
request = DescribeInstanceMonitorDataRequest()
request.set_accept_format('json')
# 获取 n 天前时间,并格式化为api指定格式时间戳
def get_date(days=0):
return (datetime.now() - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:00Z")
# 设置获取起始时间点(阿里api限制,故每次获取一天数据)并获取数据
def do_request(days=0, ins_id=""):
start_time = get_date(days) # 起始时间
end_time = get_date(days - 1) # 截止时间
request.set_InstanceId(ins_id)
request.set_StartTime(start_time)
request.set_EndTime(end_time)
request.set_Period(600) # 获取监控数据的间隔时间,设置为10分钟,阿里每次最多返回400条监控数据
res = client.do_action_with_exception(request)
return res
# 格式化json输出
def get_pretty_print(res):
dat = json.loads(res)
js = json.dumps(dat, sort_keys=True, indent=4, separators=(',', ': '), ensure_ascii=False)
return js
def main():
# 指定并新建存储目录
dir_name = str(input('please input the dir name to save monitor data. eg: 2020-04-09: ')) # 要新建的目录名
new_path = '/data/monitor_data/' + dir_name # 指定存储位置,windows 下如:'d:/monitor_data/' + dir_name
os.mkdir(new_path)
for insId in instance_id: # 获取所有实例数据
path = new_path + '/' + insId
for day in range(7, 0, -1): # 获取一周数据
response = do_request(day, insId)
output = get_pretty_print(response)
with open(path, 'a+') as f:
f.write(output) # 以实例id为文件名写入数据
print('%s done' % insId)
print()
print()
print('all info saved to %s' % new_path)
if __name__ == '__main__':
main()
使用以上脚本可获取所有实例1周的监控数据,以json格式存储。
接下来使用shell 脚本处理文本获取cpu 数据并统计。
#!/bin/bash
#
#批量获取实例cpu平均负载
path="/data/monitor_data/xxxx" # 填入上面设置的数据存储目录路径
for InstanceId in `ls -l $path |grep -v total|awk '{print$9}'` # 获取目录下所有文件名,即实例ID,并循环处理计算cpu平均负载
do
cat $path/$InstanceId |grep CPU|sed "s/\"//g"|sed "s/,//g"|awk '{print$2}' > /tmp/cpu # 分离出所有cpu占用率值
count=`wc -l /tmp/cpu |awk '{print $1}'`
sum=0
for cpuUse in `cat /tmp/cpu`
do
let sum+=cpuUse
done
cpuAvg=$((sum/count))
echo "InstanceId: $InstanceId cpu_average: $cpuAvg" | tee -a /tmp/cpuAvg
done
echo ""
echo ""
echo "all done and saved the result to /tmp/cpuAvg"
再排序一下即可获得cpu一周平均负载降序:
需要观察其他指标可自行处理监控数据项。 |
Two Key Concepts: Consistency and Satisfiability
The last structural step in a build is to test the knowledge graph for logic, the topic of today’s Cooking with Python and KBpedia installment. We first introduced the concepts of consistency and satisfiability in CWPK #26. Axioms are assertions in an ontology, as informed by its base language; that is, the aggregate of the triple statements in a knowledge graph. Consistency is where no stated axiom entails a contradiction, either in semantic or syntactic terms. A consistent knowledge graph is one where its model has an interpretation under which all formulas in the theory are true. Satisfiability means that it is possible to find an interpretation (model) that makes the axiom true.
Satisfiability is a test of classes to discover if there is an interpretation that is non-empty. This is tested against all of the logical axioms in the current knowledge graph, most effectively driven by disjoint and functional assertions. Consistency is an ontology measure to test whether there is a model that meets all axioms. I often use the term incoherent to refer to an ontology that has unsatisfiable assertions.
The Sattler, Stevens, and Lord reference shown under the first link under Additional Documentation below offers this helpful shorthand:
Unsatisfiable: How ever hard you try, you will never find an individual which fits an unsatisfiable concept
Incoherent: Sooner or later, you are going to contradict yourself, and
Inconsistent: At least, one of the things you have said makes no sense.
In the Protégé IDE, unsatisfiable classes are shown in red in the inferred class hierarchy, and makes them subclasses of Nothing, meaning they have no instances, ever. If the ontology is inconsistent, it is indicated by a new window warning about the inconsistency and offering guidance of how to fix.
The two reasoners available to us, via either owlready2 or Protégé, are HermiT and Pellet. Hermit is better at identifying inconsistencies, while Pellet is better at identifying unsatisfiable classes. We will use both in our structural logic tests.
However, before we get into those logic topics, we need to load up our system with our new start-up routines.
Our New Startup Sequence
As we discussed in the last installment, we no longer will post the specific start-up steps. At the same time that we are moving our prior functions into modules, discussed next, we have moved those steps to the cowpoke package proper. Here is our new start-up instruction:
from cowpoke.__main__ import *
from cowpoke.config import *
Please review your configuration settings in config.py to make sure you are using the appropriate input files and you know where to write out results. Assuming you have just finished your initial structural build steps, as discussed in the past few installments, you should likely be using the kb_src = 'standard' setting.
Summary of the Added Modules
Here are the steps we took to add the two new modules of build and utils to the cowpoke package:
Added these import statements to __init__.py:
from cowpoke.build import *
from cowpoke.utils import *
Added what had been our standard start-up expressions to
__main__.py
Created two new files using Spyder for the
cowpokeproject,build.pyandutils.py, and added our standard file header to them
Moved the various functions defined in recent installments into their appropriate new file, and ensured each was added in appropriate format to define a function
def
Tested the routines and made sure all functions were now appropriately disclosed and operational.
row_clean– a helper function to shorten resource IRI strings to internal formats
class_struct_builder– the function to process class input files into KBpedia’s internal representation
property_struct_builder– the function to process property input files into KBpedia’s internal representation.
dup_remover– a function to remove duplicate rows in input files
set_union– a function to determine the union between two or more class input files
set_difference– a function to determie the difference between two (or more, though not recommended) class input files
set_intersection– a function to determine the intersection between two or more class input files
typol_intersects– a comprehensive function that calculates the pairwise intersection among all KBpedia typologies
disjoint_status– a function to extract the disjoint assertions from KBpedia
branch_orphan_check– a function to identify classes that are not properly connected with the KBpedia structure
dups_parental_chain– a helper function to identify classes that have more than one direct superclass assignment across the KBpedia structure, used to inform how to reduce redundant class hierarchy declarations.
Logic Testing of the Structure
Prior to logic testing, I suggest you review CWPK #26 again for useful background information. You may also want to refer to the sources listed below under Additional Documentation.
Use of owlready2
While it is true that owlready2 embeds basic logic calls to either the HermiT and Pellet reasoners, the amount of information forthcoming from these tools is likely insufficient to meet the needs of your logic tests. First, let’s invoke the Hermit reasoner, calling up our kb ontology:
sync_reasoner(kb)
Unfortunately, with our set-up as is, HermiT errors out on us. This is because the reasoner will not accept a file address for our imported KKO upper ontology. We could change that reference in our stored knowledge graph, but we will skip for now since we can obtain similar information from the Pellet reasoner.
So, we invoke the Pellet alternative (note the analysis will take about three or so minutes to run):
sync_reasoner_pellet(kb)
For test purposes, I had temporarily assigned JaguarCat as a subclass of JaguarVehicle, which is a common assignment error where a name might refer to two different things, in this case animals and automobiles, that are disjoint. As we noted above, this subclass assignment violates our disjoint assertions and thus is shown under the owl.Nothing category.
If we add the temporary file switch to this call, however, we will write this information to the temporary file shown in the listing, plus more importantly get some traceback on where the problem may be occurring. This is the most detailed message available:
sync_reasoner_pellet(kb, keep_tmp_file=1)
* Owlready2 * Running Pellet...
java -Xmx2000M -cp C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\antlr-3.2.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\antlr-runtime-3.2.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\aterm-java-1.6.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\commons-codec-1.6.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\httpclient-4.2.3.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\httpcore-4.2.2.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jcl-over-slf4j-1.6.4.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jena-arq-2.10.0.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jena-core-2.10.0.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jena-iri-0.9.5.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jena-tdb-0.10.0.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\jgrapht-jdk1.5.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\log4j-1.2.16.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\owlapi-distribution-3.4.3-bin.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\pellet-2.3.1.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\slf4j-api-1.6.4.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\slf4j-log4j12-1.6.4.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\xercesImpl-2.10.0.jar;C:\1-PythonProjects\Python\lib\site-packages\owlready2\pellet\xml-apis-1.4.01.jar pellet.Pellet realize --loader Jena --input-format N-Triples --ignore-imports C:\Users\mike\AppData\Local\Temp\tmpp4n32vj4
* Owlready2 * Pellet took 187.1356818675995 seconds
* Owlready * Equivalenting: kko.Generals kko.SuperTypes
* Owlready * Equivalenting: kko.SuperTypes kko.Generals
* Owlready * Equivalenting: rc.JaguarCat rc.JaguarVehicle
* Owlready * Equivalenting: rc.JaguarCat owl.Nothing
* Owlready * Equivalenting: rc.JaguarVehicle rc.JaguarCat
* Owlready * Equivalenting: rc.JaguarVehicle owl.Nothing
* Owlready * Equivalenting: owl.Nothing rc.JaguarCat
* Owlready * Equivalenting: owl.Nothing rc.JaguarVehicle
* Owlready * Reparenting rc.BiologicalLivingObject: {rc.FiniteSpatialThing, rc.OrganicMaterial, rc.NaturalTangibleStuff, rc.BiologicalMatter, rc.TemporallyContinuousThing} => {rc.BiologicalMatter, rc.FiniteSpatialThing, rc.OrganicMaterial, rc.TemporallyContinuousThing}
* Owlready * Reparenting rc.Animal: {rc.PerceptualAgent-Embodied, rc.AnimalBLO, rc.Organism, rc.Heterotroph} => {rc.PerceptualAgent-Embodied, rc.AnimalBLO, rc.Heterotroph}
* Owlready * Reparenting rc.Vertebrate: {rc.SentientAnimal, rc.MulticellularOrganism, rc.ChordataPhylum} => {rc.SentientAnimal, rc.ChordataPhylum}
* Owlready * Reparenting rc.SolidTangibleThing: {rc.ContainerIndependentShapedThing, rc.FiniteSpatialThing} => {rc.ContainerIndependentShapedThing}
* Owlready * Reparenting rc.Automobile: {rc.SinglePurposeDevice, rc.PassengerMotorVehicle, rc.WheeledTransportationDevice, rc.RoadVehicle, rc.TransportationDevice} => {rc.SinglePurposeDevice, rc.PassengerMotorVehicle, rc.RoadVehicle, rc.WheeledTransportationDevice}
* Owlready * Reparenting rc.AutomobileTypeByBrand: {rc.Automobile, rc.FacetInstanceCollection, rc.VehiclesByBrand} => {rc.Automobile, rc.VehiclesByBrand}
* Owlready * Reparenting rc.DeviceTypeByFunction: {rc.FacetInstanceCollection, rc.PhysicalDevice} => {rc.PhysicalDevice}
* Owlready * Reparenting rc.TransportationDevice: {rc.Conveyance, rc.HumanlyOccupiedSpatialObject, rc.Equipment, rc.DeviceTypeByFunction} => {rc.Conveyance, rc.HumanlyOccupiedSpatialObject, rc.Equipment}
* Owlready * Reparenting rc.LandTransportationDevice: {rc.TransportationProduct, rc.TransportationDevice} => {rc.TransportationDevice}
* Owlready * Reparenting rc.DeviceTypeByPowerSource: {rc.FacetInstanceCollection, rc.PhysicalDevice} => {rc.PhysicalDevice}
* Owlready * (NB: only changes on entities loaded in Python are shown, other changes are done but not listed)
Notice this longer version (as it true for the logs written to file) also flags some of our cyclical references.
Once the run completes, we can also call up the two classes (in this instance, not for what you have locally) that are unsatisfied:
list(kb.inconsistent_classes())
[rc.JaguarCat, owl.Nothing, rc.JaguarVehicle]
Use of owlready2’s reasoners also enables a couple of additional methods that can be helpful, especially in cases such as the analysis of parental chains that we undertook last installment. Here are two additional calls that are useful:
kb.get_parents_of(rc.Automobile)
[rc.PassengerMotorVehicle,
rc.RoadVehicle,
rc.SinglePurposeDevice,
rc.TransportationDevice,
rc.WheeledTransportationDevice]
kb.get_children_of(rc.Automobile)
[rc.HondaCar,
rc.LuxuryCar,
rc.AlfaRomeoCar,
rc.Automobile-GasolineEngine,
rc.AutomobileTypeByBrand,
rc.GermanCar,
rc.AutoSteeringSystemType,
rc.AutomobileTypeByBodyStyle,
rc.AutomobileTypeByConventionalSizeClassification,
rc.AutomobileTypeByModel,
rc.AutonomousCar,
rc.GMAutomobile,
rc.DemonstrationCar,
rc.ElectricCar,
rc.JapaneseCar,
rc.HumberCar,
rc.SaabCar,
rc.NashCar,
rc.NewCar,
rc.OffRoadAutomobile,
rc.PoliceCar,
rc.RentalCar,
rc.UsedAutomobile,
rc.VauxhallCar]
You can also invoke data or property value tests with Pellet, including or not debugging:
sync_reasoner_pellet(infer_property_values=True, debug=1)
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
It is clear that reasoner support in owlready2 is a dynamic thing, with more capabilities being added periodically to new releases. At this juncture, however, for our purposes, we’d like to have a bit more capability and explanation tracing as we complete our structure logic tests. For these purposes, let’s switch to Protégé.
Reasoning with Protégé
At this point, I think using Protégé directly is the better choice for concerted logic testing. To do so, you will likely need to take two steps:
Using the File → Check for plugins … option in Protégé, make sure that Pellet is checked and installed on your system
Offline, increase the memory allocated to Protégé to up to 80% of your free memory. The settings are found in the first lines of either run.batorProtege.l4j.ini(remember, this series is based on Windows 10) in your Protégé startup directory. The two values areXms6000MandXmx6000M(showing my own increased settings for a machine with 16 GB of RAM); you may need to do an online search if you want to understand these settings better.
Then, to operate your reasoners once you have started up and loaded KBpedia (or your current knowledge graph) with Protégé, go to Reasoner (1) on the main menu, then pick your reasoner at the bottom of that menu. In this case, we are starting up with HermiT (2):
Truth is, I have tended to work more with Pellet over the years. My impression is that HermiT is largely consistent with what I have seen in Pellet, and HermiT does load in Protégé with the file assignment of KKO that was not accepted by owlready2.
So, on that basis, I log off and re-load and now choose the Pellet option. When we Reasoner → Start reasoner, and then after loading, go to the classes tab and then pick the Class hierarchy (inferred) (1) (note the yellow background and red text), we see the two temporary assignments now showing under owl:Nothing (2):
In the case of an ‘inconsistent ontology’ a more detailed screen appears (not shown, since we have not rigged KBpedia to display such) that helps track back the possible causes.
Our own internal build routines with Clojure and the OWLAPI has a more detailed output and better tracing of possible unsatisfiable issues. I have not provided such routines in this CWPK series because, it is not absolutely necessary for our ‘roundtripping‘ objectives, and to accomplish such in Python is likely way beyond my limited programming skills. This general area of decomposing structural builds from a logical perspective remains a pretty weak one with available tools.
OOPS! Scanner
Another very useful utility for checking possible problems is the OOPS! (OntOlogy Pitfall Scanner) online tool. You may copy your ontology to its online form (not recommended for something the size of KBpedia) or point the tool to a URI where you have stored the file. If you are using the utility frequently, there is also a REST API to the system.
It presently provides 33 pitfall tests in areas such as structure, function, usability, consistency, and completeness. OOPS! classifies pitfalls it finds into minor, important or critical designations:
OOPS! will catch issues that you would never identify on your own. Of course, you are not obligated to fix any of the issues, but some will likely seem appropriate. It is probably a good idea to run your knowledge graph against OOPS! at least once each major development cycle.
Some Logic Fix Guidelines
Of course, there may be many logic issues that arise in a knowledge graph. However, since we have largely restricted our scope to structure integrity and disjointedness, here are some general points drawn from experience of how to interpret and correct these kinds of issues.
An
owl.Nothingassignment with KBpedia likely is due to a misassigned disjoint assertion, since there has been much testing in this area
The first and likeliest fix is to remove the offending disjoint assertion
If there are multiple overlaps, look to the higher tier concepts, since they may be causative for a cascade of classes below them
A large number of overlaps, with some diversity among them, perhaps indicates a wrong disjoint assertion between typologies
To reclaim what intuitively (or abductively) feels like what should be a disjoint assertion between two typologies, consider cleaving one of the two typologies to better segregate the perceived distinctions
Some conflicts may be resolved by moving the offending concept higher in the hierarchy, since more general typologies have fewer disjoint assertions
Manually drawing Venn diagrams is one technique for helping to think through interactions and overlaps
When introducing a new typology, or somehow shifting or re-organizing others, try to take only incremental steps. Very large structure changes are hard to diagnose and tease out; it seems to require fewer iterations to get to a clean build by taking more and smaller steps
Assign
domainandrangeto allobjectPropertiesanddataProperties, but also be relaxed in the assignments to account for the diversity of data characterizations in the wild. As perhaps cleaning or vetting routines get added, these assignments may be tightened
Ultimately, all such choices are ones of design, understandability, and defensibility. In difficult or edge cases, it is often necessary to study and learn more, and sometimes re-do boundaries of offending concepts in order to segregate the problem areas.
This material completes the structure build portions of our present cycle. We can next turn our attention to loading up the annotations in our knowledge graph to complete the build cycle.
Additional Documentation
Here are some supplementary references that may help to explain these concepts further: |
Tengo dos modelos:
class Account(models.Model): ... class Transaction(models.Model): .... account = models.ForeignKey(Account) source_account = models.ForeignKey(Account, null=True)
Necesito mostrar el número de transacciones para cada una de las cuentas de un usuario. La annotate de Django parecía la herramienta adecuada para esta tarea. Yo si:
queryset = models.Account.objects.filter(user=self.request.user) queryset.annotate(transactions_count=Count('transaction'))
Esto da el número correcto para transacciones con el campo de account establecido en la cuenta predicada, pero deja de lado las transacciones donde source_account se establece en la cuenta predicada.
Usando el shell de Django puedo hacer algo como:
accounts_count = user_transactions.filter(Q(account=account)|Q(source_account=account)).count()
Esto da la respuesta correcta. ¿Hay algo que estoy haciendo mal? ¿Puede alguien apuntarme en la dirección correcta? Cualquier asistencia es muy apreciada.
Yo establecería related_name a sus campos ForeignKey . Entonces es un poco más fácil trabajar con ellos. Así, por ejemplo, en tus modelos vamos a configurar:
class Transaction(models.Model): ... account = models.ForeignKey(Account, related_name='transactions') source_account = models.ForeignKey(Account, null=True, related_name='source_transactions')
entonces puede hacer algo como:
queryset = models.Account.objects.filter(user=self.request.user).annotate(transactions_count=(Count('transactions')+Count('source_transactions'))
Funcionaría sin el nombre también, es más fácil de leer y más fácil. El punto principal es agregar las dos Count como un campo en la annotate .
El mejor enfoque para este tipo de problemas es imaginarlos en SQL en bruto y luego intentar imitarlo en Django ORM. (en el sql sin formato, también simplemente agregaría dos columnas como SELECT (a.col + a.col2) AS count
El problema es que su transacción tiene que forgeinKeys to Account. Yo sugeriría intentar algo como esto
class Transaction(models.Model): .... account = models.ForeignKey(Account, related_name="transaction_account") source_account = models.ForeignKey(Account, null=True, related_name="transaction_source_account")
Luego en su consulta:
queryset.annotate(transactions_count=((Count('transaction_account') + Count('transaction_source_account'))
|
blob: 7419d2d3399f52a093ad4ace2e9c089de0ea0949 (
plain
)
# Copyright (C) 2006, Red Hat, Inc.
# Copyright (C) 2007, One Laptop Per Child
# Copyright (C) 2009, Tomeu Vizoso, Simon Schampijer
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
import os
import time
import re
from gettext import gettext as _
from gi.repository import GObject
from gi.repository import Gtk
from gi.repository import Gdk
from gi.repository import Pango
from gi.repository import WebKit
from gi.repository import Soup
from sugar3 import env
from sugar3.activity import activity
from sugar3.graphics import style
from sugar3.graphics.icon import Icon
from widgets import BrowserNotebook
from palettes import ContentInvoker
from filepicker import FilePicker
import globalhistory
import downloadmanager
from pdfviewer import PDFTabPage
ZOOM_ORIGINAL = 1.0
_ZOOM_AMOUNT = 0.1
_LIBRARY_PATH = '/usr/share/library-common/index.html'
_WEB_SCHEMES = ['http', 'https', 'ftp', 'file', 'javascript', 'data',
'about', 'gopher', 'mailto']
_NON_SEARCH_REGEX = re.compile('''
(^localhost(\\.[^\s]+)?(:\\d+)?(/.*)?$|
^[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]$|
^::[0-9a-f:]*$| # IPv6 literals
^[0-9a-f:]+:[0-9a-f:]*$| # IPv6 literals
^[^\\.\s]+\\.[^\\.\s]+.*$| # foo.bar...
^https?://[^/\\.\s]+.*$|
^about:.*$|
^data:.*$|
^file:.*$)
''', re.VERBOSE)
DEFAULT_ERROR_PAGE = os.path.join(activity.get_bundle_path(),
'data/error_page.tmpl')
class CommandListener(object):
def __init__(self, window):
self._window = window
def handleEvent(self, event):
if not event.isTrusted:
return
uri = event.originalTarget.ownerDocument.documentURI
if not uri.startswith('about:neterror?e=nssBadCert'):
return
cls = components.classes['@sugarlabs.org/add-cert-exception;1']
cert_exception = cls.createInstance(interfaces.hulahopAddCertException)
cert_exception.showDialog(self._window)
class TabbedView(BrowserNotebook):
__gtype_name__ = 'TabbedView'
__gsignals__ = {
'focus-url-entry': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
def __init__(self):
BrowserNotebook.__init__(self)
self.props.show_border = False
self.props.scrollable = True
# Used to connect and disconnect functions when 'switch-page'
self._browser = None
self._load_status_changed_hid = None
self.connect('size-allocate', self.__size_allocate_cb)
self.connect('page-added', self.__page_added_cb)
self.connect('page-removed', self.__page_removed_cb)
self.connect_after('switch-page', self.__switch_page_cb)
self.add_tab()
self._update_closing_buttons()
self._update_tab_sizes()
def __switch_page_cb(self, tabbed_view, page, page_num):
if tabbed_view.get_n_pages():
self._connect_to_browser(tabbed_view.props.current_browser)
def _connect_to_browser(self, browser):
if self._browser is not None:
self._browser.disconnect(self._load_status_changed_hid)
self._browser = browser
self._load_status_changed_hid = self._browser.connect(
'notify::load-status', self.__load_status_changed_cb)
def normalize_or_autosearch_url(self, url):
"""Normalize the url input or return a url for search.
We use SoupURI as an indication of whether the value given in url
is not something we want to search; we only do that, though, if
the address has a web scheme, because SoupURI will consider any
string: as a valid scheme, and we will end up prepending http://
to it.
This code is borrowed from Epiphany.
url -- input string that can be normalized to an url or serve
as search
Return: a string containing a valid url
"""
def has_web_scheme(address):
if address == '':
return False
scheme, sep, after = address.partition(':')
if sep == '':
return False
return scheme in _WEB_SCHEMES
soup_uri = None
effective_url = None
if has_web_scheme(url):
try:
soup_uri = Soup.URI.new(url)
except TypeError:
pass
if soup_uri is None and not _NON_SEARCH_REGEX.match(url):
# Get the user's LANG to use as default language of
# the results
locale = os.environ.get('LANG', '')
language_location = locale.split('.', 1)[0].lower()
language = language_location.split('_')[0]
# If the string doesn't look like an URI, let's search it:
url_search = 'http://www.google.com/search?' \
'q=%(query)s&ie=UTF-8&oe=UTF-8&hl=%(language)s'
query_param = Soup.form_encode_hash({'q': url})
# [2:] here is getting rid of 'q=':
effective_url = url_search % {'query': query_param[2:],
'language': language}
else:
if has_web_scheme(url):
effective_url = url
else:
effective_url = 'http://' + url
return effective_url
def __size_allocate_cb(self, widget, allocation):
self._update_tab_sizes()
def __page_added_cb(self, notebook, child, pagenum):
self._update_closing_buttons()
self._update_tab_sizes()
def __page_removed_cb(self, notebook, child, pagenum):
if self.get_n_pages():
self._update_closing_buttons()
self._update_tab_sizes()
def __new_tab_cb(self, browser, url):
new_browser = self.add_tab(next_to_current=True)
new_browser.load_uri(url)
new_browser.grab_focus()
def __create_web_view_cb(self, web_view, frame):
new_web_view = Browser()
new_web_view.connect('web-view-ready', self.__web_view_ready_cb)
return new_web_view
def __web_view_ready_cb(self, web_view):
"""
Handle new window requested and open it in a new tab.
This callback is called when the WebKit.WebView request for a
new window to open (for example a call to the Javascript
function 'window.open()' or target="_blank")
web_view -- the new browser there the url of the
window.open() call will be loaded.
This object is created in the signal callback
'create-web-view'.
"""
web_view.connect('new-tab', self.__new_tab_cb)
web_view.connect('open-pdf', self.__open_pdf_in_new_tab_cb)
web_view.connect('create-web-view', self.__create_web_view_cb)
web_view.grab_focus()
self._insert_tab_next(web_view)
def __open_pdf_in_new_tab_cb(self, browser, url):
tab_page = PDFTabPage()
tab_page.browser.connect('new-tab', self.__new_tab_cb)
tab_page.browser.connect('tab-close', self.__tab_close_cb)
label = TabLabel(tab_page.browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
next_index = self.get_current_page() + 1
self.insert_page(tab_page, label, next_index)
tab_page.show()
label.show()
self.set_current_page(next_index)
tab_page.setup(url)
def __load_status_changed_cb(self, widget, param):
if self.get_window() is None:
return
status = widget.get_load_status()
if status in (WebKit.LoadStatus.PROVISIONAL,
WebKit.LoadStatus.COMMITTED,
WebKit.LoadStatus.FIRST_VISUALLY_NON_EMPTY_LAYOUT):
self.get_window().set_cursor(Gdk.Cursor(Gdk.CursorType.WATCH))
elif status in (WebKit.LoadStatus.FAILED,
WebKit.LoadStatus.FINISHED):
self.get_window().set_cursor(Gdk.Cursor(Gdk.CursorType.LEFT_PTR))
def add_tab(self, next_to_current=False):
browser = Browser()
browser.connect('new-tab', self.__new_tab_cb)
browser.connect('open-pdf', self.__open_pdf_in_new_tab_cb)
browser.connect('web-view-ready', self.__web_view_ready_cb)
browser.connect('create-web-view', self.__create_web_view_cb)
if next_to_current:
self._insert_tab_next(browser)
else:
self._append_tab(browser)
self.emit('focus-url-entry')
return browser
def _insert_tab_next(self, browser):
tab_page = TabPage(browser)
label = TabLabel(browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
next_index = self.get_current_page() + 1
self.insert_page(tab_page, label, next_index)
tab_page.show()
self.set_current_page(next_index)
def _append_tab(self, browser):
tab_page = TabPage(browser)
label = TabLabel(browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
self.append_page(tab_page, label)
tab_page.show()
self.set_current_page(-1)
def on_add_tab(self, gobject):
self.add_tab()
def __tab_close_cb(self, label, tab_page):
if tab_page.props.browser == self.props.current_browser:
# Current browser was just closed. The next tab of it has
# to take the focus.
current_page_num = self.page_num(tab_page)
if self.get_n_pages() - 1 == current_page_num:
# This tab was the last. Grab the left one.
page_to_focus = current_page_num - 1
else:
# This tab was in the middle. Grab the right one.
page_to_focus = current_page_num + 1
nth_page = self.get_nth_page(page_to_focus)
nth_page.props.browser.grab_focus()
self.remove_page(self.page_num(tab_page))
tab_page.destroy()
def _update_tab_sizes(self):
"""Update tab widths based in the amount of tabs."""
n_pages = self.get_n_pages()
canvas_size = self.get_allocation()
allowed_size = canvas_size.width
if n_pages == 1:
# use half of the whole space
tab_expand = False
tab_new_size = int(allowed_size / 2)
elif n_pages <= 8: # ensure eight tabs
tab_expand = True # use all the space available by tabs
tab_new_size = -1
else:
# scroll the tab toolbar if there are more than 8 tabs
tab_expand = False
tab_new_size = (allowed_size / 8)
for page_idx in range(n_pages):
page = self.get_nth_page(page_idx)
label = self.get_tab_label(page)
self.child_set_property(page, 'tab-expand', tab_expand)
label.update_size(tab_new_size)
def _update_closing_buttons(self):
"""Prevent closing the last tab."""
first_page = self.get_nth_page(0)
first_label = self.get_tab_label(first_page)
if self.get_n_pages() == 1:
first_label.hide_close_button()
else:
first_label.show_close_button()
def load_homepage(self):
browser = self.current_browser
if os.path.isfile(_LIBRARY_PATH):
browser.load_uri('file://' + _LIBRARY_PATH)
else:
default_page = os.path.join(activity.get_bundle_path(),
"data/index.html")
browser.load_uri('file://' + default_page)
browser.grab_focus()
def _get_current_browser(self):
if self.get_n_pages():
return self.get_nth_page(self.get_current_page()).browser
else:
return None
current_browser = GObject.property(type=object,
getter=_get_current_browser)
def get_history(self):
tab_histories = []
for index in xrange(0, self.get_n_pages()):
tab_page = self.get_nth_page(index)
tab_histories.append(tab_page.browser.get_history())
return tab_histories
def set_history(self, tab_histories):
if tab_histories and isinstance(tab_histories[0], dict):
# Old format, no tabs
tab_histories = [tab_histories]
while self.get_n_pages():
self.remove_page(self.get_n_pages() - 1)
def is_pdf_history(tab_history):
return (len(tab_history) == 1 and
tab_history[0]['url'].lower().endswith('pdf'))
for tab_history in tab_histories:
if is_pdf_history(tab_history):
url = tab_history[0]['url']
tab_page = PDFTabPage()
tab_page.browser.connect('new-tab', self.__new_tab_cb)
tab_page.browser.connect('tab-close', self.__tab_close_cb)
label = TabLabel(tab_page.browser)
label.connect('tab-close', self.__tab_close_cb, tab_page)
self.append_page(tab_page, label)
tab_page.show()
label.show()
tab_page.setup(url, title=tab_history[0]['title'])
else:
browser = Browser()
browser.connect('new-tab', self.__new_tab_cb)
browser.connect('open-pdf', self.__open_pdf_in_new_tab_cb)
browser.connect('web-view-ready', self.__web_view_ready_cb)
browser.connect('create-web-view', self.__create_web_view_cb)
self._append_tab(browser)
browser.set_history(tab_history)
def is_current_page_pdf(self):
index = self.get_current_page()
current_page = self.get_nth_page(index)
return isinstance(current_page, PDFTabPage)
Gtk.rc_parse_string('''
style "browse-tab-close" {
xthickness = 0
ythickness = 0
}
widget "*browse-tab-close" style "browse-tab-close"''')
class TabPage(Gtk.ScrolledWindow):
__gtype_name__ = 'BrowseTabPage'
def __init__(self, browser):
GObject.GObject.__init__(self)
self._browser = browser
self.add(browser)
browser.show()
def _get_browser(self):
return self._browser
browser = GObject.property(type=object,
getter=_get_browser)
class TabLabel(Gtk.HBox):
__gtype_name__ = 'BrowseTabLabel'
__gsignals__ = {
'tab-close': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
def __init__(self, browser):
GObject.GObject.__init__(self)
browser.connect('notify::title', self.__title_changed_cb)
browser.connect('notify::load-status', self.__load_status_changed_cb)
self._title = _('Untitled')
self._label = Gtk.Label(label=self._title)
self._label.set_ellipsize(Pango.EllipsizeMode.END)
self._label.set_alignment(0, 0.5)
self.pack_start(self._label, True, True, 0)
self._label.show()
close_tab_icon = Icon(icon_name='browse-close-tab')
button = Gtk.Button()
button.props.relief = Gtk.ReliefStyle.NONE
button.props.focus_on_click = False
icon_box = Gtk.HBox()
icon_box.pack_start(close_tab_icon, True, False, 0)
button.add(icon_box)
button.connect('clicked', self.__button_clicked_cb)
button.set_name('browse-tab-close')
self.pack_start(button, False, True, 0)
close_tab_icon.show()
icon_box.show()
button.show()
self._close_button = button
def update_size(self, size):
self.set_size_request(size, -1)
def hide_close_button(self):
self._close_button.hide()
def show_close_button(self):
self._close_button.show()
def __button_clicked_cb(self, button):
self.emit('tab-close')
def __title_changed_cb(self, widget, param):
title = widget.props.title
if not title:
title = os.path.basename(widget.props.uri)
self._label.set_text(title)
self._title = title
def __load_status_changed_cb(self, widget, param):
status = widget.get_load_status()
if status == WebKit.LoadStatus.FAILED:
self._label.set_text(self._title)
elif WebKit.LoadStatus.PROVISIONAL <= status \
< WebKit.LoadStatus.FINISHED:
self._label.set_text(_('Loading...'))
elif status == WebKit.LoadStatus.FINISHED:
if widget.props.title == None:
self._label.set_text(_('Untitled'))
self._title = _('Untitled')
class Browser(WebKit.WebView):
__gtype_name__ = 'Browser'
__gsignals__ = {
'new-tab': (GObject.SignalFlags.RUN_FIRST,
None,
([str])),
'open-pdf': (GObject.SignalFlags.RUN_FIRST,
None,
([str])),
'security-status-changed': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
CURRENT_SUGAR_VERSION = '0.98'
SECURITY_STATUS_SECURE = 1
SECURITY_STATUS_INSECURE = 2
def __init__(self):
WebKit.WebView.__init__(self)
web_settings = self.get_settings()
# Add SugarLabs user agent:
identifier = ' SugarLabs/' + self.CURRENT_SUGAR_VERSION
web_settings.props.user_agent += identifier
# Change font size based in the GtkSettings font size. The
# gtk-font-name property is a string with format '[font name]
# [font size]' like 'Sans Serif 10'.
gtk_settings = Gtk.Settings.get_default()
gtk_font_name = gtk_settings.get_property('gtk-font-name')
gtk_font_size = float(gtk_font_name.split()[-1])
web_settings.props.default_font_size = gtk_font_size * 1.2
web_settings.props.default_monospace_font_size = \
gtk_font_size * 1.2 - 2
self.set_settings(web_settings)
# Scale text and graphics:
self.set_full_content_zoom(True)
# This property is used to set the title immediatly the user
# presses Enter on the URL Entry
self.loading_uri = None
self.security_status = None
# Reference to the global history and callbacks to handle it:
self._global_history = globalhistory.get_global_history()
self.connect('notify::load-status', self.__load_status_changed_cb)
self.connect('notify::title', self.__title_changed_cb)
self.connect('download-requested', self.__download_requested_cb)
self.connect('mime-type-policy-decision-requested',
self.__mime_type_policy_cb)
self.connect('load-error', self.__load_error_cb)
ContentInvoker(self)
try:
self.connect('run-file-chooser', self.__run_file_chooser)
except TypeError:
# Only present in WebKit1 > 1.9.3 and WebKit2
pass
def get_history(self):
"""Return the browsing history of this browser."""
back_forward_list = self.get_back_forward_list()
items_list = self._items_history_as_list(back_forward_list)
# If this is an empty tab, return an empty history:
if len(items_list) == 1 and items_list[0] is None:
return []
history = []
for item in items_list:
history.append({'url': item.get_uri(),
'title': item.get_title()})
return history
def set_history(self, history):
"""Restore the browsing history for this browser."""
back_forward_list = self.get_back_forward_list()
back_forward_list.clear()
for entry in history:
uri, title = entry['url'], entry['title']
history_item = WebKit.WebHistoryItem.new_with_data(uri, title)
back_forward_list.add_item(history_item)
def get_history_index(self):
"""Return the index of the current item in the history."""
back_forward_list = self.get_back_forward_list()
history_list = self._items_history_as_list(back_forward_list)
current_item = back_forward_list.get_current_item()
return history_list.index(current_item)
def set_history_index(self, index):
"""Go to the item in the history specified by the index."""
back_forward_list = self.get_back_forward_list()
current_item = index - back_forward_list.get_back_length()
item = back_forward_list.get_nth_item(current_item)
if item is not None:
self.go_to_back_forward_item(item)
def _items_history_as_list(self, history):
"""Return a list with the items of a WebKit.WebBackForwardList."""
back_items = []
for n in reversed(range(1, history.get_back_length() + 1)):
item = history.get_nth_item(n * -1)
back_items.append(item)
current_item = [history.get_current_item()]
forward_items = []
for n in range(1, history.get_forward_length() + 1):
item = history.get_nth_item(n)
forward_items.append(item)
all_items = back_items + current_item + forward_items
return all_items
def get_source(self, async_cb, async_err_cb):
data_source = self.get_main_frame().get_data_source()
data = data_source.get_data()
if data_source.is_loading() or data is None:
async_err_cb()
temp_path = os.path.join(activity.get_activity_root(), 'instance')
file_path = os.path.join(temp_path, '%i' % time.time())
file_handle = file(file_path, 'w')
file_handle.write(data.str)
file_handle.close()
async_cb(file_path)
def open_new_tab(self, url):
self.emit('new-tab', url)
def __run_file_chooser(self, browser, request):
picker = FilePicker(self)
chosen = picker.run()
picker.destroy()
if chosen:
request.select_files([chosen])
elif hasattr(request, 'cancel'):
# WebKit2 only
request.cancel()
return True
def __load_status_changed_cb(self, widget, param):
status = widget.get_load_status()
if status <= WebKit.LoadStatus.COMMITTED:
# Add the url to the global history or update it.
uri = self.get_uri()
self._global_history.add_page(uri)
if status == WebKit.LoadStatus.COMMITTED:
# Update the security status.
response = widget.get_main_frame().get_network_response()
message = response.get_message()
if message:
use_https, certificate, tls_errors = message.get_https_status()
if use_https:
if tls_errors == 0:
self.security_status = self.SECURITY_STATUS_SECURE
else:
self.security_status = self.SECURITY_STATUS_INSECURE
else:
self.security_status = None
self.emit('security-status-changed')
def __title_changed_cb(self, widget, param):
"""Update title in global history."""
uri = self.get_uri()
if self.props.title is not None:
title = self.props.title
if not isinstance(title, unicode):
title = unicode(title, 'utf-8')
self._global_history.set_page_title(uri, title)
def __mime_type_policy_cb(self, webview, frame, request, mimetype,
policy_decision):
"""Handle downloads and PDF files."""
if mimetype == 'application/pdf':
self.emit('open-pdf', request.get_uri())
policy_decision.ignore()
return True
elif not self.can_show_mime_type(mimetype):
policy_decision.download()
return True
return False
def __download_requested_cb(self, browser, download):
downloadmanager.add_download(download, browser)
return True
def __load_error_cb(self, web_view, web_frame, uri, web_error):
"""Show Sugar's error page"""
# Don't show error page if the load was interrupted by policy
# change or the request is going to be handled by a
# plugin. For example, if a file was requested for download or
# an .ogg file is going to be played.
if web_error.code in (WebKit.PolicyError.\
FRAME_LOAD_INTERRUPTED_BY_POLICY_CHANGE,
WebKit.PluginError.WILL_HANDLE_LOAD):
return True
data = {
'page_title': _('This web page could not be loaded'),
'title': _('This web page could not be loaded'),
'message': _('"%s" could not be loaded. Please check for '
'typing errors, and make sure you are connected '
'to the internet.') % uri,
'btn_value': _('Try again'),
'url': uri,
}
html = open(DEFAULT_ERROR_PAGE, 'r').read() % data
web_frame.load_alternate_string(html, uri, uri)
return True
class PopupDialog(Gtk.Window):
def __init__(self):
GObject.GObject.__init__(self)
self.set_type_hint(Gdk.WindowTypeHint.DIALOG)
border = style.GRID_CELL_SIZE
self.set_default_size(Gdk.Screen.width() - border * 2,
Gdk.Screen.height() - border * 2)
self.view = WebKit.WebView()
self.view.connect('notify::visibility', self.__notify_visibility_cb)
self.add(self.view)
self.view.realize()
def __notify_visibility_cb(self, web_view, pspec):
if self.view.props.visibility:
self.view.show()
self.show()
|
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => "Foo::Bar",
VERSION_FROM => "lib/Foo/Bar.pm",
);
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
As there are various Make programs with incompatible syntax, which use operating system shells, again with incompatible syntax, it is important for users of this module to know which flavour of Make a Makefile has been written for so they'll use the correct one and won't have to face the possibly bewildering errors resulting from using the wrong one.
On POSIX systems, that program will likely be GNU Make; on Microsoft Windows, it will be either Microsoft NMake, DMake or GNU Make. See the section on the "MAKE" parameter for details.
ExtUtils::MakeMaker (EUMM) is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
All inputs to WriteMakefile are Unicode characters, not just octets. EUMM seeks to handle all of these correctly. It is currently still not possible to portably use Unicode characters in module names, because this requires Perl to handle Unicode filenames, which is not yet the case on Windows.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL INSTALL_BASE=~
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory, and if it exists it executes the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
You can also organize your tests within subdirectories in the t/ directory. To do so, use the test directive in your Makefile.PL. For example, if you had tests in:
t/foo
t/foo/bar
You could tell make to run tests in both of those directories with the following directives:
test => {TESTS => 't/*/*.t t/*/*/*.t'}
test => {TESTS => 't/foo/*.t t/foo/bar/*.t'}
The first will run all test files in all first-level subdirectories and all subdirectories they contain. The second will run tests in only the t/foo and t/foo/bar.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
If you want to run particular test files, set the TEST_FILES variable. It is possible to use globbing with this mechanism.
make test TEST_FILES='t/foobar.t t/dagobah*.t'
Windows users who are using nmake should note that due to a bug in nmake, when specifying TEST_FILES you must use back-slashes instead of forward-slashes.
nmake test TEST_FILES='t\foobar.t t\dagobah*.t'
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
INSTALLDIRS set to perl site vendor PERLPREFIX SITEPREFIX VENDORPREFIXINST_ARCHLIB INSTALLARCHLIB INSTALLSITEARCH INSTALLVENDORARCHINST_LIB INSTALLPRIVLIB INSTALLSITELIB INSTALLVENDORLIBINST_BIN INSTALLBIN INSTALLSITEBIN INSTALLVENDORBININST_SCRIPT INSTALLSCRIPT INSTALLSITESCRIPT INSTALLVENDORSCRIPTINST_MAN1DIR INSTALLMAN1DIR INSTALLSITEMAN1DIR INSTALLVENDORMAN1DIRINST_MAN3DIR INSTALLMAN3DIR INSTALLSITEMAN3DIR INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
Sometimes older versions of the module you're installing live in other directories in @INC. Because Perl loads the first version of a module it finds, not the newest, you might accidentally get one of these older versions even after installing a brand new version. To delete all other versions of the module you're installing (not simply older ones) set the UNINST variable.
make install UNINST=1
INSTALL_BASE can be passed into Makefile.PL to change where your module will be installed. INSTALL_BASE is more like what everyone else calls "prefix" than PREFIX is.
To have everything installed in your home directory, do the following.
# Unix users, INSTALL_BASE=~ works fine
perl Makefile.PL INSTALL_BASE=/path/to/your/home/dir
Like PREFIX, it sets several INSTALL* attributes at once. Unlike PREFIX it is easy to predict where the module will end up. The installation pattern looks like this:
INSTALLARCHLIB INSTALL_BASE/lib/perl5/$Config{archname}
INSTALLPRIVLIB INSTALL_BASE/lib/perl5
INSTALLBIN INSTALL_BASE/bin
INSTALLSCRIPT INSTALL_BASE/bin
INSTALLMAN1DIR INSTALL_BASE/man/man1
INSTALLMAN3DIR INSTALL_BASE/man/man3
INSTALL_BASE in MakeMaker and --install_base in Module::Build (as of 0.28) install to the same location. If you want MakeMaker and Module::Build to install to the same location simply set INSTALL_BASE and --install_base to the same location.
INSTALL_BASE was added in 6.31.
PREFIX and LIB can be used to set several INSTALL* attributes in one go. Here's an example for installing into your home directory.
# Unix users, PREFIX=~ works fine
perl Makefile.PL PREFIX=/path/to/your/home/dir
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib). How the exact location is determined is complicated and depends on how your Perl was configured. INSTALL_BASE works more like what other build systems call "prefix" than PREFIX and we recommend you use that instead.
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile, on UNIX, this is called Makefile.aperl (may be system dependent). If you want to force the creation of a new perl, it is recommended that you delete this Makefile.aperl, so the directories are searched through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl by default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config, otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth mentioning that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line. Attributes that became available with later versions of MakeMaker are indicated.
In order to maintain portability of attributes with older versions of MakeMaker you may want to use App::EUMM::Upgrade with your Makefile.PL.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
Array of strings containing name (and email address) of package author(s). Is used in CPAN Meta files (META.yml or META.json) and PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Available in version 6.5503 and above.
A hash of modules that are needed to build your module but not run it.
This will go into the build_requires field of your META.yml and the build of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Available in version 6.52 and above.
A hash of modules that are required to run Makefile.PL itself, but not to run your distribution.
This will go into the configure_requires field of your META.yml and the configure of the prereqs field of your META.json.
Defaults to { "ExtUtils::MakeMaker" => 0 } if this attribute is not specified.
The format is the same as PREREQ_PM.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=~/tmp/ and installation would go into ~/tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. ~/tmp/ not ~/tmp.
Ref to array of subdirectories containing Makefile.PLs e.g. ['sdbm'] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME below but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Specifies the extension of the module's loadable object. For example:
DLEXT => 'unusual_ext', # Default value is $Config{so}
NOTE: When using this option to alter the extension of a module's loadable object, it is also necessary that the module's pm file specifies the same change:
local $DynaLoader::dl_dlext = 'unusual_ext';
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS=perl.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install' which copies files from INST_SCRIPT to this directory if INSTALLDIRS is set to vendor.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perllibrary that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
Available in version 6.31 and above.
The licensing terms of your distribution. Generally it's "perl_5" for the same license as Perl itself.
See CPAN::Meta::Spec for the list of options.
Defaults to "unknown".
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
When this is set to 1, OBJECT will be automagically derived from O_FILES.
Variant of make you intend to run the generated Makefile with. This parameter lets Makefile.PL know what make quirks to account for when generating the Makefile.
MakeMaker also honors the MAKE environment variable. This parameter takes precedence.
Currently the only significant values are 'dmake' and 'nmake' for Windows users, instructing MakeMaker to generate a Makefile in the flavour of DMake ("Dennis Vadura's Make") or Microsoft NMake respectively.
Defaults to $Config{make}, which may go looking for a Make program in your environment.
How are you supposed to know what flavour of Make a Makefile has been generated for if you didn't specify a value explicitly? Search the generated Makefile for the definition of the MAKE variable, which is used to recursively invoke the Make utility. That will tell you what Make you're supposed to invoke the Makefile with.
Boolean which tells MakeMaker that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
This hash should map POD files (or scripts containing POD) to the man file names under the blib/man1/ directory, as in the following example:
MAN1PODS => {
'doc/command.pod' => 'blib/man1/command.1',
'scripts/script.pl' => 'blib/man1/script.1',
}
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
Example similar to MAN1PODS.
If it is intended that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
Available in version 6.46 and above.
A hashref of items to add to the CPAN Meta file (META.yml or META.json).
They differ in how they behave if they have the same key as the default metadata. META_ADD will override the default value with its own. META_MERGE will merge its value with the default.
Unless you want to override the defaults, prefer META_MERGE so as to get the advantage of any future defaults.
Where prereqs are concerned, if META_MERGE is used, prerequisites are merged with their counterpart WriteMakefile() argument (PREREQ_PM is merged into {prereqs}{runtime}{requires}, BUILD_REQUIRES into {prereqs}{build}{requires}, CONFIGURE_REQUIRES into {prereqs}{configure}{requires}, and TEST_REQUIRES into {prereqs}{test}{requires}). When prereqs are specified with META_ADD, the only prerequisites added to the file come from the metadata, not WriteMakefile() arguments.
Note that these configuration options are only used for generating META.yml and META.json -- they are NOT used for MYMETA.yml and MYMETA.json. Therefore data in these fields should NOT be used for dynamic (user-side) configuration.
By default CPAN Meta specification 1.4 is used. In order to use CPAN Meta specification 2.0, indicate with meta-spec the version you want to use.
META_MERGE => {
"meta-spec" => { version => 2 },
resources => {
repository => {
type => 'git',
url => 'git://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.git',
web => 'https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker',
},
},
},
Available in version 6.48 and above.
The minimum required version of Perl for this distribution.
Either the 5.006001 or the 5.6.1 format is acceptable.
If the extension links to a library that it builds, set this to the name of the library (see SDBM_File)
The package representing the distribution. For example, Test::More or ExtUtils::MakeMaker. It will be used to derive information about the distribution such as the "DISTNAME", installation locations within the Perl library and where XS files will be looked for by default (see "XS").
NAME must be a valid Perl package name and it must have an associated .pm file. For example, Foo::Bar is a valid NAME and there must exist Foo/Bar.pm. Any XS code should be in Bar.xs unless stated otherwise.
Your distribution must have a NAME.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands it's running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml and META.json module meta-data files during 'make distdir'.
Defaults to false.
When true, suppresses the generation of MYMETA.yml and MYMETA.json module meta-data files during 'perl Makefile.PL'.
Defaults to false.
When true, suppresses the writing of packlist files for installs.
Defaults to false.
When true, suppresses the appending of installations to perllocal.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string or an array containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o" or ["tkpBind.o", "tkpButton.o", "tkpCanvas.o"]
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl. If it contains spaces or other shell metacharacters, it needs to be quoted in a way that protects them, since this value is intended to be inserted in a shell command line in the Makefile. E.g.:
# Perl executable lives in "C:/Program Files/Perl/bin"
# Normally you don't need to set this yourself!
$ perl Makefile.PL PERL='"C:/Program Files/Perl/bin/perl.exe" -w'
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Neglecting to set this flag in any one of the loaded extension nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp}, falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for directories. Defaults to 755.
Desired permission for read/writable files. Defaults to 644.
Desired permission for executable files. Defaults to 755.
MakeMaker can run programs to generate files for you at build time. By default any file named *.PL (except Makefile.PL and Build.PL) in the top level directory will be assumed to be a Perl program and run passing its own basename in as an argument. This basename is actually a build target, and there is an intention, but not a requirement, that the *.PL file make the file passed to to as an argument. For example...
perl foo.PL foo
This behavior can be overridden by supplying your own set of files to search. PL_FILES accepts a hash ref, the key being the file to run and the value is passed in as the first argument when the PL file is run.
PL_FILES => {'bin/foobar.PL' => 'bin/foobar'}
PL_FILES => {'foo.PL' => 'foo.c'}
Would run bin/foobar.PL like this:
perl bin/foobar.PL bin/foobar
If multiple files from one program are desired an array ref can be used.
PL_FILES => {'bin/foobar.PL' => [qw(bin/foobar1 bin/foobar2)]}
In this case the program will be run multiple times using each target file.
perl bin/foobar.PL bin/foobar1
perl bin/foobar.PL bin/foobar2
PL files are normally run after pm_to_blib and include INST_LIB and INST_ARCH in their @INC, so the just built modules can be accessed... unless the PL file is making a module (or anything else in PM) in which case it is run before pm_to_blib and does not include INST_LIB and INST_ARCH in its @INC. This apparently odd behavior is there for backwards compatibility (and it's somewhat DWIM). The argument passed to the .PL is set up as a target to build in the Makefile. In other sections such as postamble you can specify a dependency on the filename/argument that the .PL is supposed (or will have, now that that is is a dependency) to generate. Note the file to be generated will still be generated and the .PL will still run even without an explicit dependency created by you, since the all target still depends on running all eligible to run.PL files.
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIB)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done. You could use:
PM_FILTER => 'perl -ne "print unless /^\\#/"',
to remove all the leading comments on the fly during the build. In order to be as portable as possible, please consider using a Perl one-liner rather than Unix (or other) utilities, as above. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = perl -ne "print unless /^\#/"
Without the \ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
You will almost certainly be better off using the PL_FILES system, instead. See above, or the ExtUtils::MakeMaker::FAQ entry.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
Name of the executable used to run PPM_UNINSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager before the removal of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be affected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die instead of simply informing the user of the missing dependencies.
It is extremely rare to have to use PREREQ_FATAL. Its use by module authors is strongly discouraged and should never be used lightly.
For dependencies that are required in order to run Makefile.PL, see CONFIGURE_REQUIRES.
Module installation tools have ways of resolving unmet dependencies but to do that they need a Makefile. Using PREREQ_FATAL breaks this. That's bad.
Assuming you have good test coverage, your tests should fail with missing dependencies informing the user more strongly that something is wrong. You can write a t/00compile.t test which will simply check that your code compiles and stop "make test" prematurely if it doesn't. See "BAIL_OUT" in Test::More for more details.
A hash of modules that are needed to run your module. The keys are the module names ie. Test::More, and the minimum version is the value. If the required version number is 0 any version will do. The versions given may be a Perl v-string (see version) or a range (see CPAN::Meta::Requirements).
This will go into the requires field of your META.yml and the runtime of the prereqs field of your META.json.
PREREQ_PM => {
# Require Test::More at least 0.47
"Test::More" => "0.47",
# Require any version of Acme::Buffy
"Acme::Buffy" => 0,
}
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = {
'A::B' => Vers1,
'C::D' => Vers2,
...
};
If a distribution defines a minimal required perl version, this is added to the output as an additional line of the form:
$MIN_PERL_VERSION = '5.008001';
If BUILD_REQUIRES is not empty, it will be dumped as $BUILD_REQUIRES hashref.
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
A minimal required perl version, if present, will look like this:
perl(perl)>=5.008001
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
When true, perform the generation and addition to the MANIFEST of the SIGNATURE file in the distdir during 'make distdir', via 'cpansign -s'.
Note that you need to install the Module::Signature module to perform this operation.
Defaults to false.
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Available in version 6.64 and above.
A hash of modules that are needed to test your module but not run or build it.
This will go into the build_requires field of your META.yml and the test of the prereqs field of your META.json.
The format is the same as PREREQ_PM.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains something like a $VERSION assignment or package Name VERSION will be used. The following lines will be parsed o.k.:
# Good
package Foo::Bar 1.23; # 1.23
$VERSION = '1.00'; # 1.00
*VERSION = \'1.01'; # 1.01
($VERSION) = q$Revision$ =~ /(\d+)/g; # The digits in $Revision$
$FOO::VERSION = '1.10'; # 1.10
*FOO::VERSION = \'1.11'; # 1.11
but these will fail:
# Bad
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
"Version strings" are incompatible and should not be used.
# Bad
$VERSION = 1.2.3;
$VERSION = v1.2.3;
version objects are fine. As of MakeMaker 6.35 version.pm will be automatically loaded, but you must declare the dependency on version.pm. For compatibility with older MakeMaker you should load on the same line as $VERSION is declared.
# All on one line
use version; our $VERSION = qv(1.2.3);
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
Hashref with options controlling the operation of XSMULTI:
{
xs => {
all => {
# options applying to all .xs files for this distribution
},
'lib/Class/Name/File' => { # specifically for this file
DEFINE => '-Dfunktastic', # defines for only this file
INC => "-I$funkyliblocation", # include flags for only this file
# OBJECT => 'lib/Class/Name/File$(OBJ_EXT)', # default
LDFROM => "lib/Class/Name/File\$(OBJ_EXT) $otherfile\$(OBJ_EXT)", # what's linked
},
},
}
Note xs is the file-extension. More possibilities may arise in the future. Note that object names are specified without their XS extension.
LDFROM defaults to the same as OBJECT. OBJECT defaults to, for XSMULTI, just the XS filename with the extension replaced with the compiler-specific object-file extension.
The distinction between OBJECT and LDFROM: OBJECT is the make target, so make will try to build it. However, LDFROM is what will actually be linked together to make the shared object or static library (SO/SL), so if you override it, make sure it includes what you want to make the final SO/SL, almost certainly including the XS basename with $(OBJ_EXT) appended.
When this is set to 1, multiple XS files may be placed under lib/ next to their corresponding *.pm files (this is essential for compiling with the correct VERSION values). This feature should be considered experimental, and details of it may change.
This feature was inspired by, and small portions of code copied from, ExtUtils::MakeMaker::BigHelper. Hopefully this feature will render that module mainly obsolete.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to -protoypes, -noprototypes or the empty string. The empty string is equivalent to the xsubpp default, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDENCY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
Specify the targets for testing.
{TESTS => 't/*.t'}
RECURSIVE_TEST_FILES can be used to include all directories recursively under t that contain .t files. It will be ignored if you provide your own TESTS attribute, defaults to false.
{RECURSIVE_TEST_FILES=>1}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
MAN3PODS => ' '
This is commonly used to suppress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is MAN3PODS => { }.
MakeMaker.pm uses the architecture-specific information from Config.pm. In addition it evaluates architecture specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with an .pl file name extension (eg. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules which hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
does a realclean first and then removes backup files such as *~, *.bak, *.old and *.orig
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create META.yml and META.json module meta-data file in the distdir and add this to the distdir's MANIFEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TO_UNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile(
...other options...
dist => {
COMPRESS => "bzip2",
SUFFIX => ".bz2"
}
);
Long plaguing users of MakeMaker based modules has been the problem of getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. Over the years, it has become standard to keep this information in one or more CPAN Meta files distributed with each distribution.
The original format of CPAN Meta files was YAML and the corresponding file was called META.yml. In 2010, version 2 of the CPAN::Meta::Spec was released, which mandates JSON format for the metadata in order to overcome certain compatibility issues between YAML serializers and to avoid breaking older clients unable to handle a new version of the spec. The CPAN::Meta library is now standard for accessing old and new-style Meta files.
If CPAN::Meta is installed, MakeMaker will automatically generate META.json and META.yml files for you and add them to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
At the 2008 QA Hackathon in Oslo, Perl module toolchain maintainers agreed to use the CPAN Meta format to communicate post-configuration requirements between toolchain components. These files, MYMETA.json and MYMETA.yml, are generated when Makefile.PL generates a Makefile (if CPAN::Meta is installed). Clients like CPAN or CPANPLUS will read these files to see what prerequisites must be fulfilled before building or testing the distribution. If you wish to shut this feature off, set the NO_MYMETA WriteMakeFile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
use ExtUtils::MakeMaker qw(WriteEmptyMakefile);
WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Please note that while this module works on Perl 5.6, it is no longer being routinely tested on 5.6 - the earliest Perl version being routinely tested, and expressly supported, is 5.8.1. However, patches to repair any breakage on 5.6 are still being accepted.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split as the shell would, and the result is processed before any actual command line arguments are processed.
PERL_MM_OPT='CCFLAGS="-Wl,-rpath -Wl,/foo/bar/lib" LIBS="-lwibble -lwobble"'
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
Same as the PERL_CORE parameter. The parameter overrides this.
Module::Build is a pure-Perl alternative to MakeMaker which does not rely on make or any other external utility. It is easier to extend to suit your needs.
Module::Install is a wrapper around MakeMaker which adds features not normally available.
Dist::Zilla makes it easy for the module author to create MakeMaker-based distributions with lots of bells and whistles.
Andy Dougherty doughera@lafayette.edu, Andreas König andreas.koenig@mind.de, Tim Bunce timb@cpan.org. VMS support by Charles Bailey bailey@newman.upenn.edu. OS/2 support by Ilya Zakharevich ilya@math.ohio-state.edu.
Currently maintained by Michael G Schwern schwern@pobox.com
Send patches and ideas to makemaker@perl.org.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see https://metacpan.org/release/ExtUtils-MakeMaker.
Repository available at https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. |
microsoftml.rx_ensemble: Combine models into a single one
Usage
microsoftml.rx_ensemble(formula: str,
data: [<class ‘revoscalepy.datasource.RxDataSource.RxDataSource’>,
<class ‘pandas.core.frame.DataFrame’>, <class ‘list’>],
trainers: typing.List[microsoftml.modules.base_learner.BaseLearner],
method: str = None, model_count: int = None,
random_seed: int = None, replace: bool = False,
samp_rate: float = None, combine_method: [‘Average’, ‘Median’,
‘Vote’] = ‘Median’, max_calibration: int = 100000,
split_data: bool = False, ml_transforms: list = None,
ml_transform_vars: list = None, row_selection: str = None,
transforms: dict = None, transform_objects: dict = None,
transform_function: str = None,
transform_variables: list = None,
transform_packages: list = None,
transform_environment: dict = None, blocks_per_read: int = None,
report_progress: int = None, verbose: int = 1,
compute_context: revoscalepy.computecontext.RxComputeContext.RxComputeContext = None)
Description
Train an ensemble of models.
Details
rx_ensemble is a function that trains a number of modelsof various kinds to obtain better predictive performance than could beobtained from a single model.
Arguments
formula
A symbolic or mathematical formula in valid Python syntax,enclosed in double quotes. A symbolic formula might reference objects in thedata source, such as "creditScore ~ yearsEmploy".Interaction terms (creditScore * yearsEmploy) andexpressions (creditScore == 1) are not currently supported.
data
A data source object or a character string specifying a .xdffile or a data frame object. Alternatively, it can be a list of data sourcesindicating each model should be trained using one of the data sources in the list.In this case, the length of the data list must be equal to model_count.
trainers
A list of trainers with their arguments. The trainers arecreated by using FastTrees, FastForest, FastLinear,LogisticRegression, NeuralNetwork, or OneClassSvm.
method
A character string that specifies the type of ensemble:"anomaly" for Anomaly Detection, "binary" for Binary Classification,multiClass for Multiclass Classification, or "regression" for Regression.
random_seed
Specifies the random seed. The default value is None.
model_count
Specifies the number of models to train. If this number is greaterthan the length of the trainers list, the trainers list is duplicated to match model_count.
replace
A logical value specifying if the sampling of observations should be donewith or without replacement. The default value is False.
samp_rate
A scalar of positive value specifying the percentage of observations to sample foreach trainer. The default is 1.0 for sampling with replacement (i.e., replace=True) and 0.632for sampling without replacement (i.e., replace=False). When split_data is True, the default ofsamp_rate is 1.0 (no sampling is done before splitting).
split_data
A logical value specifying whether or not to train the base models on non-overlapping partitions.The default is False. It is available only for RxSpark compute context and ignored for others.
combine_method
Specifies the method used to combine the models:
"Median": to compute the median of the individual model outputs,
"Average": to compute the average of the individual model outputs and
"Vote": to compute (pos-neg) / the total number of models, where ‘pos’is the number of positive outputs and ‘neg’ is the number of negative outputs.
max_calibration
Specifies the maximum number of examples to use for calibration. This argument is ignored for all tasks other than binary classification.
ml_transforms
Specifies a list of MicrosoftML transforms to beperformed on the data before training or None if no transforms areto be performed. Transforms that require an additional pass over the data(such as featurize_text, categorical are not allowed.These transformations are performed after any specified R transformations.The default value is None.
ml_transform_vars
Specifies a character vector of variable namesto be used in ml_transforms or None if none are to be used.The default value is None.
row_selection
NOT SUPPORTED. Specifies the rows (observations) from the data set that are to be used by the model with the name of a logical variable from the data set (in quotes) or with a logical expression using variables in the data set. For example:
rowSelection = "old"will only use observations in which the value of the variableoldisTrue.
rowSelection = (age > 20) & (age < 65) & (log(income) > 10)only uses observations in which the value of theagevariable is between 20 and 65 and the value of thelogof theincomevariable is greater than 10.
The row selection is performed after processing any datatransformations (see the arguments transforms ortransform_func). As with all expressions, row_selection can bedefined outside of the function call using the expressionfunction.
transforms
NOT SUPPORTED. An expression of the form that representsthe first round of variable transformations. As withall expressions, transforms (or row_selection) can be definedoutside of the function call using the expression function.
transform_objects
NOT SUPPORTED. A named list that contains objects that can bereferenced by transforms, transform_function, androw_selection.
transform_function
The variable transformation function.
transform_variables
A character vector of input data set variables needed for the transformation function.
transform_packages
NOT SUPPORTED. A character vector specifying additional Python packages(outside of those specified in RxOptions.get_option("transform_packages")) tobe made available and preloaded for use in variable transformation functions.For example, those explicitly defined in revoscalepy functions viatheir transforms and transform_function arguments or those definedimplicitly via their formula or row_selection arguments. Thetransform_packages argument may also be None, indicating thatno packages outside RxOptions.get_option("transform_packages") are preloaded.
transform_environment
NOT SUPPORTED. A user-defined environment to serve as a parent to allenvironments developed internally and used for variable data transformation.If transform_environment = None, a new “hash” environment with parentrevoscalepy.baseenv is used instead.
blocks_per_read
Specifies the number of blocks to read for each chunk of data read from the data source.
report_progress
An integer value that specifies the level of reporting on the row processing progress:
0: no progress is reported.
1: the number of processed rows is printed and updated.
2: rows processed and timings are reported.
3: rows processed and all timings are reported.
verbose
An integer value that specifies the amount of output wanted.If 0, no verbose output is printed during calculations. Integervalues from 1 to 4 provide increasing amounts of information.
compute_context
Sets the context in which computations are executed,specified with a valid revoscalepy.RxComputeContext.Currently local and revoscalepy.RxSpark compute contextsare supported. When revoscalepy.RxSpark is specified,the training of the models is done in a distributed way, and the ensemblingis done locally. Note that the compute context cannot be non-waiting.
Returns
A rx_ensemble object with the trained ensemble model. |
Friday, January 11, 2019
keep vagrant box time in-sync:
vb.customize [ “guestproperty”, “set”, :id, “/VirtualBox/GuestAdd/VBoxService/--timesync-set-threshold”, 10000 ]
local docker registry for single node kubernetes:
$ docker image pull registry:2
$ docker container run -d -p 5000:5000 --name registry registry:2
$ docker image build -t myapp .
$ docker image tag myapp localhost:5000/myapp
$ docker image push localhost:5000/myapp
$ kubectl create deployment myapp --image=localhost:5000/myapp
Tuesday, January 15, 2019
two interesting command line tools:
peco/peco: Simplistic interactive filtering tool: like grep but with filtering
kevinschoon/pomo: Pomodoro CLI: task management systems
another news aggregator: DevURLs
reading an interesting book: Vertically Integrated Architectures: Versioned Data Models, Implicit Services, and Persistence-Aware Programming
Tuesday, January 22, 2019
initialize a new module, this creates a go.mod file:
$ go mod init github.com/you/hello
go build will fetch and add dependencies to go.mod, no go get required.
common usage:
go list -m all— View final versions that will be used in a build for all direct and indirect dependencies (details)
go get -u— Update all direct and indirect dependencies to latest minor or patch upgrades (details)
go mod tidy— Prune any no-longer-needed dependencies from go.mod and add any dependencies needed for other combinations of OS, architecture, and build tags (details)
replacedirective orgohack— Use a fork, local copy or exact version of a dependency (details)
go mod vendor— Optional step to create a vendor directory (details)
run node.js code in jupyter notebook, use this module: pixiedust/pixiedust
it doesn't create a new node.js kernel, but instead it uses python kernel and pass %%node block codes to node runtime, and allows node.js variables copied as python variables
this way is much better than using a node.js kernel.
for custom node_modules folder, change PIXIEDUST_HOME environment variable (it still creates a node folder inside it, so have to make soft link to node_modules)
I'm not a .NET developer, but I listened to Software Engineering Radio Episode 348: Riccardo Terrell on Concurrency, decided to give it a try
Thursday, January 24, 2019
joined safari online training Python Data Handling - A Deeper Dive
some notes:
class Stock(object):
__slots__ = ('name', 'shares', 'price')
def __init__(self, name, shares, price)
self.name = name
self.shares = shares
self.price = price
from collections import namedtuple
Stock = namedtuple('Stock', ['name', 'shares', 'price'])
it creates a class that you use to make instances:
s = Stock('GOOG', 100, 490.1)
s.name
import tracemalloc
tracemalloc.start()
// do something here
print(tracemalloc.get_traced_memory())
Monday, January 28, 2019
found a good website on mysql stuffs: Mydbops – Scaling database Operations
another one is for golang: Golang – Jexia
some random links from twitter:
Understanding The Memcached Source Code — Event Driven I
Practical strace: Retrofitting Build Caching
Eliminate error handling by eliminating errors
HTTP/3: From root to tip
Garbage Collection in Redux Applications
Our Software Dependency Problem
reading Deep Learning with PyTorch, I found PyTorch is little bit easier to understand than tensorflow
and Practical Deep Learning for Coders 2019 also uses PyTorch
also reading Deep Learning and the Game of Go
in order to get a better foundation, reading Good Math: A Geek's Guide to the Beauty of Numbers, Logic, and Computation as well
Thursday, January 31, 2019
create thumbnails for many photos:
$ mkdir thumbs$ mogrify -path thumbs/ -thumbnail 500x500 *.jpg
vim, create new file under nerdtree:
toggle nerdtree
cursor on target directory, press mto toggle menu
press ato create new node (file)
Blog Archives
Older Entries
2018 December
2018 November
2018 October
2018 September
2018 August
2018 July
2018 June
2018 May
2018 April
2018 March
2018 February
2018 January
2017 December
2017 November
2017 October
2017 September
2017 August
2017 July
2017 June
2017 May
2017 April
2017 March
2017 February
2017 January
2016 December
2016 November
2016 October
2016 September
2016 August
2016 July
2016 June
2016 May
2016 April
2016 March
2016 February
2016 January
2015 December
2015 November
2015 October
2015 September
2015 August
2015 July
2015 June
2015 May
2015 April
2015 March
2015 February
2015 January
2014 December
2014 November
2014 October
2014 September
2014 August
2014 March
2014 February
2014 January
2013 December
2013 October
2013 July
2013 June
2013 May
2013 March
2013 February
2013 January
2012 December
2012 November
2012 October
2012 September
2012 August |
porque estou tendo o erro File "", line 50 print ("Este é o valor da soma dos produtos entre as amostras e seus respectivos pesos", PiXi)" ^ para calcular a média ponderada com o conjunto de dados inseridos pelo usuário.
Por estar reciclando os Ãndices n e i , posso ter problemas para criar outros cálculos estatÃsticos como por exemplo Mediana e Média quadrática?
SyntaxError: EOL while scanning string literal # Média
#Ãndice referente ao número da amostra.
n=int(input("Insira o número total de amostras"))
#Ãndice i que vai percorrer todas as amostras
i=0
#Lista vazia para conter as amostras
Amostras=[]
#For para percorrer todo o contador das médias
for i in range(i,n):
#Adiciona o valor da amostra dentro da lista de amostras
Amostras.append(int(input("Insira o valor das amostras")))
#Escreve a lista de amostras na tela para o usuário
print(Amostras)
#Soma das amostras
Xi = 0
#Total
N = len(Amostras)
for amostrai in Amostras:
Xi +=amostrai
print ("Esse é o valor da soma das amostras", Xi)
print ("Este é a quantidade de amostras que você possui", N)
print ("Esta é a média", Xi/N)
# Média Ponderada
#Lista com os pesos de cada variável
pi = []
#Lista com os valores de cada variável
xi = []
#Lista com os produtos do peso e da variável associada ao Ãndice i
amostraponderadai = []
for i in range (i,n):
pi.append(int(input("Insira o peso associado a variável: ",i)))
xi.append(int(input("Insira a variável associado ao Ãndice:", i)))
amostraponderadai.append(pi(i)*xi(i))
#Soma das amostras
PiXi = 0
Nponderadai = len(amostraponderadai)
#Total do produto da amostra pelo peso
for pixi in amostraponderadai:
PiXi += pixi
print ("Este é o valor da soma dos produtos entre as amostras e seus respectivos pesos", PiXi)"
print ("Este é a quantidade de amostras para a análise da média ponderada", Nponderadai)
print ("Esta é a média ponderada do conjunto de dados": PiXi/Nponderadai)
|
Hope this is the correct category for my question. I’m trying to import platformio and use it in my Python project to build firmware.
I am doing something like:
from platformio.package.manager.platform import PlatformPackageManager
from platformio.platform.factory import PlatformFactory
from platformio.project.config import ProjectConfig
package_manager = PlatformPackageManager()
project_config = ProjectConfig(os.path.join("myproject", "platformio.ini")
project_config.items(env=self.env, as_dict=True)
package_manager.install(spec=platform)
platform = self.project_config.items(env="my_env", as_dict=True)["platform"]
factory = PlatformFactory.new(platform)
factory.run({"pioenv": "my_env", "project_config": "myproject"}, [], True, False, 1)
But I run into Error: BoardConfig: Board is not defined. If I run the same code in the project folder (making sure it is the current directory) it works.
I’m wondering if I am missing something or if there is a much better way of importing the project to use it as a library? I noticed PlatformFactory.run() only gives 0 or 1 for result but would be nice to capture errors as well.
Thank you for your time in advance! |
NewerOlder
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
.. include:: system.rst
.. _het_modular_jobs:
Heterogeneous and Cross-Module Jobs
===================================
.. _het_modular_jobs_overview:
Overview
--------
.. _het_modular_jobs_slurm:
Slurm Support for Heterogeneous Jobs
------------------------------------
For detailed information about Slurm, please take a look on the :ref:`Quick Introduction <quickintro>` and :ref:`Batch system <batchsystem>` page.
19
With Slurm 17.11 support for Heterogeneous Jobs was introduced. This allows to spawn a job across multiple partitions of a cluster, and across different Modules of our Supercomputers. See the official Slurm documentation (SlurmHetJob_) for additional information on this feature.
20 21 22 23 24 25 26 27 28 29 30 31 32 33 34
.. _SlurmHetJob: https://slurm.schedmd.com/heterogeneous_jobs.html
**salloc/srun**
.. code-block:: none
salloc -A <budget account> -p <batch, ...> : -p <booster, ...> [ : -p <booster, ...> ]
srun ./prog1 : ./prog2 [ : ./progN ]
**sbatch**
.. code-block:: none
#!/bin/bash
35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75
#SBATCH -p <batch, ...>
#SBATCH packjob
#SBATCH -p <booster, ...>
srun ./prog1 : ./prog2
.. _het_modular_jobs_software:
Loading Software in a Heterogeneous Environment
-----------------------------------------------
Executing applications in a modular environment, especially when different Modules have different architectures or the dependencies of programs are not uniform, can be a challenging tasks.
**Uniform Architecture and Dependencies**
As long as the Architecture of the given modules are uniform and there are not mutually exclusive dependencies for the binaries that are going to be executed, one can rely on the ``module`` command. Take a look on the :ref:`Quick Introduction <quickintro>` if ``module`` is new for you.
.. code-block:: none
#!/bin/bash -x
#SBATCH ...
module load [...]
srun ./prog1 : ./prog2
**Non Uniform Architectures and Mutual Exclusive Dependencies**
A tool called ``xenv`` was implement to ease the task of loading modules for heterogeneous jobs. For details on supported command line arguments, execute ``xenv -h`` on the given system.
.. code-block:: none
srun --account=<budget account> --partition=<batch, ...> xenv -L intel-para IMB-1 : --partition=<knl, ...> xenv -L Architecture/KNL intel-para IMB-1
.. ifconfig:: system_name == 'jureca'
.. _het_modular_jobs_mpi_bridges:
MPI Traffic Across Modules
--------------------------
When the nodes of a job belong to different interconnects and MPI communication is used, bridging has to take place. To support this workflow, e.g. run a job on a Cluster with Infiniband and a Booster with OmniPath, a Gateway Daemon (psgwd, ParaStation Gateway Daemon) was implemented that takes care of moving packages across fabrics.
76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
Loading MPI
~~~~~~~~~~~
**JURECA Cluster**
Communication with the psgwd has to be ensured via loading the software module **pscom-gateway** either via ``xenv`` or the ``module`` command.
**JURECA Booster, Current MPI Workaround (April/May/... 2019)**
For the time being, prefixing JURECA **Booster** binaries via ``msa_fix_ld`` is necessary. This is due to the fact that the installed libmpi version does not support the psgwd. We hope this will go away soon.
``msa_fix_ld`` is modifying the environment, so it might influence the modules you load.
.. code-block:: none
#!/bin/bash
export PSP_PSM=1
export LD_LIBRARY_PATH="/usr/local/jsc/msa_parastation_mpi/lib:/usr/local/jsc/msa_parastation_mpi/lib/mpi-hpl-gcc/:${LD_LIBRARY_PATH}"
$*
Requesting Gateways
~~~~~~~~~~~~~~~~~~~
To request gateway nodes for a job, the mandatory option ``gw_num`` has to be specified at submit/allocation time.
100
101 102 103
104
105 106
107
108
109
.. code-block:: none
110
111
srun -A <budget account> -p <batch, ...> --gw_num=2 xenv [-L ...] -L pscom-gateway ./prog1 : -p <booster, ...> xenv [-L ...] msa_fix_ld ./prog2
112
113 114 115 116
When submitting a job that will run later, you have to specify the number of gateways at submit time:
.. code-block:: none
117 118 119 120
sbatch --gw_num=2 ./submit-script.sbatch
.. code-block:: none
121
#!/bin/bash
122 123 124 125 126
127
srun xenv [-L ...] -L pscom-gateway ./prog1 : xenv [-L ...] msa_fix_ld ./prog2
128 129
130 131
132 133 134 135
136 137 138 139
The psgw plugin for the ParaStation management daemon extends the Slurm commands salloc, srun and sbatch with the following options:
.. code-block:: none
140
--gw_num=number Number of gateway nodes
141
--gw_file=path of the routing file
142 143
--gw_plugin=string Name of the route plugin
144 145 146 147 148
A routing file will be generated in $HOME/psgw-route-$JOBID. With the option ``gw_file`` a user-defined absolute path for the generation of the routing file can be specified:
.. code-block:: none
srun --gw_file=custom-path-to-routing-file --gw_num=2 -N 1 -n 1 hostname : -N 2 -n 2 hostname
149
150 151
152
153 154 155 156
The routing of MPI traffic across the Gateway nodes is performed by the ParaStation Gateway daemon on a per-node-pair basis.
When a certain number of gateway nodes is requested, an instance of psgwd is launched on each gateway.
By default, given the list of Cluster and Booster nodes obtained at allocation time, the system assigns each one of the Cluster node - Booster node pair to one of the instances of psgwd previously launched.
This mapping between Cluster and Booster nodes is saved into the routing file and used for the routing of the MPI traffic across the gateway nodes.
157
158
159 160
161 162 163
.. code-block:: none
164
srun --gw_plugin=$HOME/custom-route-plugin --gw_num=2 -N 1 hostname : -N 2 hostname
165
166 167 168
The ``gw_plugin`` option accepts either a label for a plugin already installed on the system, either a path to a user-defined plugin.
Currently two plugins are available on the JURECA system:
169
170
* ``plugin01`` is the default plugin (used when the ``gw_file`` is not used).
171
* ``plugin02`` is better suited for applications that use point-to-point communication between the same pairs of processes between Cluster and Booster, especially when the number of gateway nodes used is low.
172
173
The plugin file must include the functions associating a gateway node to a cluster node - booster node pair.
174 175 176 177 178 179 180 181 182 183 184
As an example, the code for ``plugin01`` is reported here:
.. code-block:: python
# Route function: Given the numerical Ids of nodes in partition A and B, the function
# returns a tuple (error, numeral of gateway)
def routeConnectionS(sizePartA, sizePartB, numGwd, numeralNodeA, numeralNodeB):
numeralGw = (numeralNodeA + numeralNodeB) % numGwd
return None, numeralGw
185
# Route function (extended interface): Make decision based on names of nodes to
186 187 188 189 190 191 192 193
# take topology into account
# def routeConnectionX(nodeListPartA, nodeListPartB, gwList, nodeA, nodeB):
# return Exception("Not implemented"), gwList[0]
routeConnectionX = None
In the case of 2 Cluster nodes, 2 Booster nodes and 2 Gateway nodes, this function results in the following mapping:
194 195 196 197 198 199 200 201
202 203
204 205 206 207
PSGWD Gateway Assignment
++++++++++++++++++++++++
If more gateways were requested than available the slurmctld prologue will fail for a interactive jobs
208 209 210 211 212 213 214 215 216 217 218
.. code-block:: none
srun --gw_num=3 -N 1 hostname : -N 2 hostname
srun: psgw: requesting 3 gateway nodes
srun: job 158553 queued and waiting for resources
srun: job 158553 has been allocated resources
srun: PrologSlurmctld failed, job killed
srun: Force Terminated job 158553
srun: error: Job allocation 158553 has been revoked
219
If batch jobs run out of gateway resources they will be re-queued and have to wait for 10 minutes before becoming eligible to start again.
220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235
Debugging
~~~~~~~~~
For debugging purposes, and to make sure the gateways are used, you might use
.. code-block:: none
export PSP_DEBUG=3
You should see output like
.. code-block:: none
<PSP:r0000003:CONNECT (192.168.12.34,26708,0x2,r0000003) to (192.168.12.41,29538,0x2,r0000004) via gw>
<PSP:r0000004:ACCEPT (192.168.12.34,26708,0x2,r0000003) to (192.168.12.41,29538,0x2,r0000004) via gw>
236 237 238 239 240 241 242 243 244 245 246
JuRoPA3
~~~~~~~
Because JUROPA3 has only one high-speed interconnect, using the ``psgwd`` is only possible using ``PSP_GATEWAY=2``. Via exporting this variable the Gateway protocols priority is boosted over the default interconnect.
.. code-block:: none
export PSP_GATEWAY=2
srun -A <budget account> -p <cluster, ...> --gw_num=2 xenv -L pscom-gateway ./prog1 : -p <booster, ...> xenv -L pscom-gateway ./prog2
JuRoPA3 has 4 Gateways available.
|
A Python Tutorial, the Basics
ð A very easy Python Tutorial! ð
#Tutorial Jam
@elipie's jam p i n g
p i n g
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
It was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable interfacing with the Ameoba operating system. It's name is python because when he was thinking about Python, he was also reading, 'Monty Python's Flying Circus'. Guido van Rossum thought that the langauge would need a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hastags
Comments are side notes you can write in python. They can be used, as I said before:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure, hastags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to make a value that you can make later.
Then you could add a if statement, but lets discuss that later.
3. f strings
f strings, printed as f(before a qoutation), is used to print or input a value already used. So what I mean is, say I put an f string on a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would work, only if the name was named. In other words, say you had a input before and you did this to it:
name = input()
Then the f string would work. Say for the input, you put in your name. Then when the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This won't use an f string either. They are also simaler. So how you would print it is like this:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas seperate the 2 strings and they add the name inside. But JBYT27, why not a plus sign? Well, this question you would have to ask Guido van Rossum, but I guess I can answer it a bit. It's really the python syntax. The syntax was made that so when you did a plus sign, not a comma, it would give you an error.
Really, the only time you would use this is to give back your name, or to find if one value was equal to each other, which we'll learn in a sec.
4. If, Elif, Else Statements
1. If Statements
If statements, printed as if, are literally as they are called, if sentences. They see if a sentence equals or is something to an object, it creates an effect. You could think an if statement as a cause and effect. An example of a if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements comes in!
2. Elif Statements
Elif statements, printed as elif are pretty much if statements. It's just that the word else and if are combined. So say you wanted to add more if statements. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, just adding a else to it!
3. Else Statements
Else statments, printed as else, are like if and elif statements. They are used to tell the computer that if something is not that and it's not that, go to this other result. You can use it like this (following up from the other upper code):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So all these modules that I listed, i'll tell you how to use, step by step! ;) But wait, what are modules?
Modules are like packages that are pre-installed in python. You just have to fully install it, which is the module(please correct me if im wrong). So like this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. By means, it clears the console(the black part) so it makes your screen clear-er. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (i think)
Note:
All of the import syntax is the same except for the names
Next is tkinter, turtle
You can use the tkinter module for GUI's (screen playing), you can import it in a normal python, or you can do this in a new repl.
You can use the turtle for drawing, it isn't used much for web developing though.
The math and sys
The math is used for math calculations, to calculate math. The sys is used for accessing used variables. I don't really know how I could explain it to you, but for more, click here
Random
The random module is used for randomizing variables and strings. Say you wanted to randomize a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few days or hoursifyoulikereading
Video links:
ranging from 1-12 hoursifyoudon'tlike reading
Otherwise:
ranging from 5 hours-a few daysreplittutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
Hum ok pretty decent tutorial, some tips:
for comments, you can also have multiple line comments:
"""multiplelinesyay"""
If and Else Statements
elif...?
Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
etc.
never heard of the "etc." module lol
The most common way people use the os package is to clear the page.
how?
whateveryouwanttoputhere = input("What is your name?")
pretty long variable, not really recommended..
you can also clear the screen using the replit package
how? examples?
and the random module (randint, randrange, choice)?
and you should prolly include more examples, especially on the modules. also what about variables? for loops? while loops? lists? you're spending a bit more time on the modules than other python ueufl things tbh..
but thank for ping I guess. Maybe try adding some more things to the tutorial, but otherwise, nice start! :D |
A Python Tutorial, the Basics
ð A very easy Python Tutorial! ð
#Tutorial Jam
@elipie's jam p i n g
p i n g
Here is a basic tutorial for Python, for beginners!
Table of Contents:
1. The developer of python
2. Comments/Hashtags
3. Print and input statements
f' strings
4. If, Elif, Else statements
5. Common Modules
1. Developer of Python
It was created in the late 1980s by Guido van Rossum in the Netherlands. It was made as a successor to the ABC language, capable interfacing with the Ameoba operating system. It's name is python because when he was thinking about Python, he was also reading, 'Monty Python's Flying Circus'. Guido van Rossum thought that the langauge would need a short, unique name, so he chose Python.
For more about Guido van Rossum, click here
2. Comments/Hastags
Comments are side notes you can write in python. They can be used, as I said before:
sidenotes
instructions or steps
etc.
How to write comments:
#This is a comment
The output is nothing because:
It is a comment and comments are invisible to the computer
Comments are not printed in Python
So just to make sure, hastags are used to make comments. And remember, comments are ignored by the computer.
3. Print and Input statements
1. Print Statements
Print statements, printed as print, are statements used to print sentences or words. So for example:
print("Hello World!")
The output would be:
Hello World!
So you can see that the print statement is used to print words or sentences.
2. Input Statements
Input statements, printed as input, are statements used to 'ask'. For example:
input("What is your name?")
The output would be:
What is your name?
However, with inputs, you can write in them. You can also 'name' the input. Like this:
name = input("What is your name?")
You could respond by doing this:
What is your name? JBYT27
So pretty much, inputs are used to make a value that you can make later.
Then you could add a if statement, but lets discuss that later.
3. f strings
f strings, printed as f(before a qoutation), is used to print or input a value already used. So what I mean is, say I put an f string on a print statement. Like this:
print(f"")
The output right now, is nothing. You didn't print anything. But say you add this:
print(f"Hello {name}!")
It would work, only if the name was named. In other words, say you had a input before and you did this to it:
name = input()
Then the f string would work. Say for the input, you put in your name. Then when the print statement would print:
Hello (whatever your name was)!
Another way you could do this is with commas. This won't use an f string either. They are also simaler. So how you would print it is like this:
name = input()
...
print("Hello ", name, "!")
The output would be the same as well! The commas seperate the 2 strings and they add the name inside. But JBYT27, why not a plus sign? Well, this question you would have to ask Guido van Rossum, but I guess I can answer it a bit. It's really the python syntax. The syntax was made that so when you did a plus sign, not a comma, it would give you an error.
Really, the only time you would use this is to give back your name, or to find if one value was equal to each other, which we'll learn in a sec.
4. If, Elif, Else Statements
1. If Statements
If statements, printed as if, are literally as they are called, if sentences. They see if a sentence equals or is something to an object, it creates an effect. You could think an if statement as a cause and effect. An example of a if statement is:
name = input("What is your name?")
#asking for name
if name == "JBYT27":
print("Hello Administrator!")
The output could be:
What is your name? JBYT27Hello Administrator!
However, say it isn't JBYT27. This is where the else, elif, try, and except statements comes in!
2. Elif Statements
Elif statements, printed as elif are pretty much if statements. It's just that the word else and if are combined. So say you wanted to add more if statements. Then you would do this:
if name == "JBYT27":
print("Hello Administrator!")
elif name == "Code":
print("Hello Code!")
It's just adding more if statements, just adding a else to it!
3. Else Statements
Else statments, printed as else, are like if and elif statements. They are used to tell the computer that if something is not that and it's not that, go to this other result. You can use it like this (following up from the other upper code):
if name == "JBYT27":
print("Hello admin!")
elif name == "Squid":
print("Hello Lord Squod!")
else:
print(f"Hello {name}!")
5. Common Modules
Common modules include:
os
time
math
sys
replit
turtle
tkinter
random
etc.
So all these modules that I listed, i'll tell you how to use, step by step! ;) But wait, what are modules?
Modules are like packages that are pre-installed in python. You just have to fully install it, which is the module(please correct me if im wrong). So like this code:
import os
...
When you do this, you successfully import the os module! But wait, what can you do with it? The most common way people use the os module is to clear the page. By means, it clears the console(the black part) so it makes your screen clear-er. But, since there are many, many, many modules, you can also clear the screen using the replit module. The code is like this:
import replit
...
replit.clear()
But one amazing thing about this importing is you can make things specific. Like say you only want to import pi and sqrt from the math package. This is the code:
from math import pi, sqrt
Let me mention that when you do this, never, ever add an and. Like from ... import ... and .... That is just horrible and stupid and... Just don't do it :)
Next is the time module
You can use the time module for:
time delay
scroll text
And yeah, that's pretty much it (i think)
Note:
All of the import syntax is the same except for the names
Next is tkinter, turtle
You can use the tkinter module for GUI's (screen playing), you can import it in a normal python, or you can do this in a new repl.
You can use the turtle for drawing, it isn't used much for web developing though.
The math and sys
The math is used for math calculations, to calculate math. The sys is used for accessing used variables. I don't really know how I could explain it to you, but for more, click here
Random
The random module is used for randomizing variables and strings. Say you wanted to randomize a list. Here would be the code:
import random
...
a_list = ["JBYT27","pie","cat","dog"]
...
random.choice(a_list)
The output would be a random choice from the variable/list. So it could be pie, JBYT27, cat, or dog. From the random module, there are many things you can import, but the most common are:
choice
randrange
etc.
And that's all for modules. If you want links, click below.
Links for modules:
And that's it!
Hooray! We made it through without sleeping!
Credits to:
Many coders for tutorials
Books and websites
replit
etc.
Links:
Web links:
ranging from a few days or hoursifyoulikereading
Video links:
ranging from 1-12 hoursifyoudon'tlike reading
Otherwise:
ranging from 5 hours-a few daysreplittutorial links
I hope you enjoyed this tutorial! I'll cya on the next post!
stay safe!
You are viewing a single comment. View All |
Making a Slowtype function in python
Slowtyping in Python
I'm sure you've seen projects that print out strings slowly instead of all at once. Today i'll show you how. Here is the code:
def slowtype(str):
newstr = ""
strcount = 0
clearcheck = 0
for element in str:
strcount = strcount + 1
for element in str:
newstr = newstr + element
print(newstr)
time.sleep(0.05)
clearcheck = clearcheck + 1
if(strcount == clearcheck):
break
else:
os.system('clear')
Lets brake it down:
first, the function accepts a string. Then, it makes 2 integers. strcount, which measures the length of the string, and clearcheck, which helps later. To figure out how long the string is, it iterates over each character in a string. Then, it moves to the second loop where the loop adds each character to a string, prints the string, increases clearcheck by 1, sleeps for 0.05 seconds, then checks if strcount is equal to clearcheck. If it is, it doesn't clear the screen. If it isn't, it clears and repeats. This used as a function will give a nice slowtype function to python code!
Here's an example: |
April 2020
In this tutorial we will learn how to customize the listview page of a specific model to show the values of a foreign key.
You can find the source code in this repository
By default the Django admin will display the fields specified in the instruction list_display of your model admin
class ShopAdmin(admin.ModelAdmin):
inlines = [shopplanning_inline]
fieldsets = [
(None, {'fields': ['user','name','email','address']}),
]
list_display = ('user','name', 'email', )
ordering = ('user',)
We have implemented in this tutorial a planning for our Shop model. It should be nice to display the planning information at the same time that we display our list of existing shops. The rendering will look like this
So let’s do this. Since Django 2.1 we can override more Django admin templates. For our use case, the template we want to override is the change_list_results.html. (Find more information about here).
Since we want this specific behaviour only for our Shop model, we need to create a directory in backoffice/templates/admin/backoffice/shop directory (backoffice is our app name).
Then inside this directory we will copy the default Django admin change_list_results.html file
cp ../../../environnements/genericenv/lib/python3.7/site-packages/django/contrib/admin/templates/admin/change_list_results.html backoffice/templates/admin/backoffice/shop/
Please adapt the path with your own path
If we edit the file, we can see items displayed in the listview are build in a for loop which generates a simple html <tr><td> elements of an html table
{% for result in results %}
{% if result.form and result.form.non_field_errors %}
<tr><td colspan="{{ result|length }}">{{ result.form.non_field_errors }}</td></tr>
{% endif %}
<tr class="{% cycle 'row1' 'row2' %}" id=" ">
{% for item in result %}
{{ item }}
{% endfor %}
</tr>
{% endfor %}
So we will need to implement our custom code within this loop.
<tr class="{% cycle 'row1' 'row2' %}" id=" ">
{% for item in result %}
{{ item }}
{% endfor %}
<tr><td></td><td></td><td></td>
<td> <table border>
{% with i=forloop.counter0|stringformat:'s'|add:':' %}
{% with items=cl.result_list|slice:i %}
{% displayPlanning refShop %}
{% endwith %}
{% endwith %}
</table>
</td>
</tr>
</tr>
We add a new line in our html table with a <tr> instruction composed of <td>elements and one of them, will contain a new html table with our planning results.
To display our planning, we need to get the shop reference (the primary key). To do that we use some template tags instruction
{% with i=forloop.counter0|stringformat:'s'|add:':' %}
{% with items=cl.result_list|slice:i %}
And then we need to create our planning HTML instruction. To do that, we will create a custom template tags and then use it
{% displayPlanning items.0.pk %}
To create a custom template tags, just create a new python directory named templatetags inside your app folder.
from django import template
from backoffice.models import *
from django.utils.safestring import mark_safe
import uuid
register = template.Library()
@register.simple_tag
def displayPlanning(refShop):
plannings = ShopPlanning.objects.filter(refShop=refShop).select_related("refShop")
fullHtml=""
for planning in plannings:
html="<tr><td>"+planning.get_dayName()+"</td><td>"+str(planning.startHour)+"</td><td>"+str(planning.endHour)+"</td></tr>"
fullHtml+=html
return mark_safe(fullHtml)
In our customtags.py file, we will register our new tag displayPlanning, which will get the planning for the refShop passed as argument, and then build our HTML table.
We need not forget to load this customtags library inside our change_list_results.html file
{% load customtags %}
|
Here is the Master Listing of Extraction and Build Steps
We are near the end of this major part in our Cooking with Python and KBpedia series in which we cover how to build KBpedia from a series of flat-text (CSV) input files. Though these CSV files may have been modified substantially offline (see, in part, CWPK #36), they are initially generated in an extraction loop, which we covered in CWPK #28-35. We have looked at these various steps in an incremental fashion, building up our code base function by function. This approach is perhaps good from a teaching perspective, but makes it kind of murky how all of the pieces fit together.
In this installment, I will list all of the steps — in sequence — for proceeding from the initial flat file extractions, to offline modifications of those files, and then the steps to build KBpedia again from the resulting new inputs. Since how all of these steps proceed depends critically on configuration settings prior to executing a given step, I also try to capture the main configuration settings appropriate to each step. The steps outlined here cover a full extract-build ‘roundtrip‘ cycle. In the next installment, we will address some of the considerations that go into doing incremental or partial extractions or builds.
Please note that the actual functions in our code modules may be modified slightly from what we presented in our interactive notebook files. These minor changes, when made, are needed to cover gaps or slight errors uncovered during full build and extraction sets. As an example, my initial passes of class structure extractions overlooked the kko.superClasses and rdfs.isDefinedyBy properties. Some issues in CSV extraction and build settings were also discovered that led to excess quoting of strings. The “official” code, then, is what is contained in the cowpoke modules, and not necessarily exactly what is in the notebook pages.
Therefore, of the many installments in this CWPK series, this present one is perhaps one of the most important for you to keep and reference. We will have occasion to summarize other steps in our series, but this installment is the most comprehensive view of the extract-and-build ’roundtrip’ cycle.
Summary of Extraction and Build Steps
Here are the basic steps in a complete roundtrip from extracting to building the knowledge graph anew:
Startup
Extraction
Structure Extraction of Classes
Structure Extraction of Properties
Annotation Extraction of Classes
Annotation Extraction of Properties
Extraction of Mappings
Offline Development and Manipulation
Clean and Test Build Input Files
Build
Build Class Structure
Build Property Structure
Build Class Annotations
Build Property Annotations
Ingest of Mappings
Test Build
The order of extraction and building of classes and properties must begin each phase because we need to have these resources adequately registered to the knowledge graph. Once done, however, there is no ordering requirement for whether mapping or annotation proceeds next. Since annotation changes are always likely in every new version or build, I have listed them before mapping, but that is only a matter of preference.
Each of these steps is described below, plus some key configuration settings as appropriate. We begin with our first step, startup:
1. Startup
from cowpoke.__main__ import *
from cowpoke.config import *
We will re-cap the entire breakdown and build process here. We first begin with structure extraction, first classes and then properties:
2. Extraction
A. Structure Extraction of Classes
We begin with the (mostly) hierarchical typologies and their linkage into KKO and with one another. Since all of the reference concepts in KBpedia are subsumed by the top-level category of Generals, we can specify it alone as a means to retrieve all of the RCs in KBpedia:
### KEY CONFIG SETTINGS (see extract_deck in config.py) ###
# 'krb_src' : 'extract' # Set in master_deck
# 'descent_type' : 'descent',
# 'loop' : 'class_loop',
# 'loop_list' : custom_dict.values(), # Single 'Generals' specified
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/extractions/classes/Generals_struct_out.csv',
# 'render' : 'r_iri',
def struct2_extractor(**extract_deck):
print('Beginning structure extraction . . .')
# 1 - render method goes here
r_default = ''
r_label = ''
r_iri = ''
render = extract_deck.get('render')
if render == 'r_default':
set_render_func(default_render_func)
elif render == 'r_label':
set_render_func(render_using_label)
elif render == 'r_iri':
set_render_func(render_using_iri)
else:
print('You have assigned an incorrect render method--execution stopping.')
return
# 2 - note about custom extractions
loop_list = extract_deck.get('loop_list')
loop = extract_deck.get('loop')
out_file = extract_deck.get('out_file')
class_loop = extract_deck.get('class_loop')
property_loop = extract_deck.get('property_loop')
descent_type = extract_deck.get('descent_type')
x = 1
cur_list = []
a_set = []
s_set = []
new_class = 'owl:Thing'
# 5 - what gets passed to 'output'
with open(out_file, mode='w', encoding='utf8', newline='') as output:
csv_out = csv.writer(output)
if loop == 'class_loop':
header = ['id', 'subClassOf', 'parent']
p_item = 'rdfs:subClassOf'
else:
header = ['id', 'subPropertyOf', 'parent']
p_item = 'rdfs:subPropertyOf'
csv_out.writerow(header)
# 3 - what gets passed to 'loop_list'
for value in loop_list:
print(' . . . processing', value)
root = eval(value)
# 4 - descendant or single here
if descent_type == 'descent':
a_set = root.descendants()
a_set = set(a_set)
s_set = a_set.union(s_set)
elif descent_type == 'single':
a_set = root
s_set.append(a_set)
else:
print('You have assigned an incorrect descent method--execution stopping.')
return
print(' . . . processing consolidated set.')
for s_item in s_set:
o_set = s_item.is_a
for o_item in o_set:
row_out = (s_item,p_item,o_item)
csv_out.writerow(row_out)
if loop == 'class_loop':
if s_item not in cur_list:
row_out = (s_item,p_item,new_class)
csv_out.writerow(row_out)
cur_list.append(s_item)
x = x + 1
print('Total unique IDs written to file:', x)
print('The structure extraction for the ', loop, 'is completed.')
struct2_extractor(**extract_deck)
B. Structure Extraction of Properties
See above with the following changes/notes:
### KEY CONFIG SETTINGS (see extract_deck in config.py) ###
# 'krb_src' : 'extract' # Set in master_deck
# 'descent_type' : 'descent',
# 'loop' : 'property_loop',
# 'loop_list' : prop_dict.values(),
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/extractions/properties/prop_struct_out.csv',
# 'render' : 'r_default',
C. Annotation Extraction of Classes
Annotations require a different method, though with a similar composition to the prior ones. It was during testing of the full extract-build roundtrip that I realized our initial class annotation extraction routine was missing for the rdfs.isDefinedBy and kko.superClassOf properties. The code in extract.py has been updated to reflect these changes.
Again, we first begin with classes. Note: by convention, I have shifted a couple structural:
### KEY CONFIG SETTINGS (see extract_deck in config.py) ###
# 'krb_src' : 'extract' # Set in master_deck
# 'descent_type' : 'descent',
# 'loop' : 'class_loop',
# 'loop_list' : custom_dict.values(), # Single 'Generals' specified
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/extractions/classes/Generals_annot_out.csv',
# 'render' : 'r_label',
def annot2_extractor(**extract_deck):
print('Beginning annotation extraction . . .')
r_default = ''
r_label = ''
r_iri = ''
render = extract_deck.get('render')
if render == 'r_default':
set_render_func(default_render_func)
elif render == 'r_label':
set_render_func(render_using_label)
elif render == 'r_iri':
set_render_func(render_using_iri)
else:
print('You have assigned an incorrect render method--execution stopping.')
return
loop_list = extract_deck.get('loop_list')
loop = extract_deck.get('loop')
out_file = extract_deck.get('out_file')
class_loop = extract_deck.get('class_loop')
property_loop = extract_deck.get('property_loop')
descent_type = extract_deck.get('descent_type')
""" These are internal counters used in this module's methods """
p_set = []
a_ser = []
x = 1
cur_list = []
with open(out_file, mode='w', encoding='utf8', newline='') as output:
csv_out = csv.writer(output)
if loop == 'class_loop':
header = ['id', 'prefLabel', 'subClassOf', 'altLabel',
'definition', 'editorialNote', 'isDefinedBy', 'superClassOf']
else:
header = ['id', 'prefLabel', 'subPropertyOf', 'domain', 'range',
'functional', 'altLabel', 'definition', 'editorialNote']
csv_out.writerow(header)
for value in loop_list:
print(' . . . processing', value)
root = eval(value)
if descent_type == 'descent':
p_set = root.descendants()
elif descent_type == 'single':
a_set = root
p_set.append(a_set)
else:
print('You have assigned an incorrect descent method--execution stopping.')
return
for p_item in p_set:
if p_item not in cur_list:
a_pref = p_item.prefLabel
a_pref = str(a_pref)[1:-1].strip('"\'')
a_sub = p_item.is_a
for a_id, a in enumerate(a_sub):
a_item = str(a)
if a_id > 0:
a_item = a_sub + '||' + str(a)
a_sub = a_item
if loop == 'property_loop':
a_item = ''
a_dom = p_item.domain
for a_id, a in enumerate(a_dom):
a_item = str(a)
if a_id > 0:
a_item = a_dom + '||' + str(a)
a_dom = a_item
a_dom = a_item
a_rng = p_item.range
a_rng = str(a_rng)[1:-1]
a_func = ''
a_item = ''
a_alt = p_item.altLabel
for a_id, a in enumerate(a_alt):
a_item = str(a)
if a_id > 0:
a_item = a_alt + '||' + str(a)
a_alt = a_item
a_alt = a_item
a_def = p_item.definition
a_def = str(a_def)[2:-2]
a_note = p_item.editorialNote
a_note = str(a_note)[1:-1]
if loop == 'class_loop':
a_isby = p_item.isDefinedBy
a_isby = str(a_isby)[2:-2]
a_isby = a_isby + '/'
a_item = ''
a_super = p_item.superClassOf
for a_id, a in enumerate(a_super):
a_item = str(a)
if a_id > 0:
a_item = a_super + '||' + str(a)
a_super = a_item
a_super = a_item
if loop == 'class_loop':
row_out = (p_item,a_pref,a_sub,a_alt,a_def,a_note,a_isby,a_super)
else:
row_out = (p_item,a_pref,a_sub,a_dom,a_rng,a_func,
a_alt,a_def,a_note)
csv_out.writerow(row_out)
cur_list.append(p_item)
x = x + 1
print('Total unique IDs written to file:', x)
print('The annotation extraction for the', loop, 'is completed.')
annot2_extractor(**extract_deck)
d=csv.get_dialect('excel')
print("Delimiter: ", d.delimiter)
print("Doublequote: ", d.doublequote)
print("Escapechar: ", d.escapechar)
print("lineterminator: ", repr(d.lineterminator))
print("quotechar: ", d.quotechar)
print("Quoting: ", d.quoting)
print("skipinitialspace: ", d.skipinitialspace)
print("strict: ", d.strict)
D. Annotation Extraction of Properties
See above with the following changes/notes:
### KEY CONFIG SETTINGS (see extract_deck in config.py) ###
# 'krb_src' : 'extract' # Set in master_deck
# 'descent_type' : 'descent',
# 'loop' : 'property_loop',
# 'loop_list' : prop_dict.values(),
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/extractions/properties/prop_annot_out.csv',
# 'render' : 'r_default',
E. Extraction of Mappings
Mappings to external sources is an integral part of KBpedia, as is likely the case for any similar, large-scale knowledge graph. As such, extractions of existing mappings is also a logical step in the overall extraction process.
Though we will not address mappings until CWPK #49, those steps belong here in the overall set of procedures for the extract-build roundtrip process.
3. Offline Development and Manipulation
The above extraction steps can capture changes over time that have been made with an ontology editing tool such as Protégé. Once that knowledge graph is at a state of readiness after using Protégé, and more major changes are desired to your knowledge graph, it is sometimes easier to work with flat files in bulk. I discussed some of my own steps using spreadsheets in CWPK #36, and I will also walk through some refactorings using bulk files in our next installment, CWPK #48. That case study will help us see at least a few of the circumstances that warrant bulk refactoring. Major additions or changes to the typologies is also an occasion for such bulk activities.
At any rate, this step in the overall roundtripping process is where such modifications are made before rebuilding the knowledge graph anew.
4. Clean and Test Build Input Files
We covered these topics in CWPK #45. If you recall, cleaning and testing of input files occurs at this logical point, but we delayed discussing it in detail until we had covered the overall build process steps. This is why this sequence number for this installment appears a bit out of order.
5. Build
The start of the build cycle is to have all structure, annotation, and mapping files in proper shape and vetted for encoding and quality.
(Note: where ‘Generals’ is specified, keep the initial capitalization, since it is also generated as such from the extraction routines and is consistent with typology naming.)
A. Build Class Structure
We start with the knowledge graph classes and their subsumption relationships, as specified in one or more class structure CSV input files. In this case, we are doing a full build, so we begin with the KKO and RC stubs, plus run our Generals typology since it is inclusive:
### KEY CONFIG SETTINGS (see build_deck in config.py) ### # Option 1: from Generals
# 'kb_src' : 'start' # Set in master_deck; only step with 'start'
# 'loop_list' : custom_dict.values(), # Single 'Generals' specified
# 'loop' : 'class_loop',
# 'base' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/classes/',
# 'ext' : '_struct_out.csv', # Note change
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/targets/ontologies/kbpedia_reference_concepts.csv',
### KEY CONFIG SETTINGS (see build_deck in config.py) ### # Option 2: from all typologies
# 'kb_src' : 'start' # Set in master_deck; only step with 'start'
# 'loop_list' : typol_dict.values(),
# 'loop' : 'class_loop',
# 'base' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/classes/',
# 'ext' : '.csv', # Note change
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/targets/ontologies/kbpedia_reference_concepts.csv',
from cowpoke.build import *
def class2_struct_builder(**build_deck):
print('Beginning KBpedia class structure build . . .')
kko_list = typol_dict.values()
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
base = build_deck.get('base')
ext = build_deck.get('ext')
out_file = build_deck.get('out_file')
if loop is not 'class_loop':
print("Needs to be a 'class_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
frag = loopval.replace('kko.','')
in_file = (base + frag + ext)
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'subClassOf', 'parent'])
for row in reader:
r_id = row['id']
r_parent = row['parent']
id = row_clean(r_id, iss='i_id')
id_frag = row_clean(r_id, iss='i_id_frag')
parent = row_clean(r_parent, iss='i_parent')
parent_frag = row_clean(r_parent, iss='i_parent_frag')
if is_first_row:
is_first_row = False
continue
with rc:
kko_id = None
kko_frag = None
if parent_frag == 'Thing':
if id in kko_list:
kko_id = id
kko_frag = id_frag
else:
id = types.new_class(id_frag, (Thing,))
if kko_id != None:
with kko:
kko_id = types.new_class(kko_frag, (Thing,))
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'subClassOf', 'parent'])
for row in reader:
r_id = row['id']
r_parent = row['parent']
id = row_clean(r_id, iss='i_id')
id_frag = row_clean(r_id, iss='i_id_frag')
parent = row_clean(r_parent, iss='i_parent')
parent_frag = row_clean(r_parent, iss='i_parent_frag')
if is_first_row:
is_first_row = False
continue
with rc:
kko_id = None
kko_frag = None
kko_parent = None
kko_parent_frag = None
if parent_frag is not 'Thing':
if id in kko_list:
continue
elif parent in kko_list:
kko_id = id
kko_frag = id_frag
kko_parent = parent
kko_parent_frag = parent_frag
else:
var1 = getattr(rc, id_frag)
var2 = getattr(rc, parent_frag)
if var2 == None:
continue
else:
print(var1, var2)
var1.is_a.append(var2)
if kko_parent != None:
with kko:
if kko_id in kko_list:
continue
else:
var1 = getattr(rc, kko_frag)
var2 = getattr(kko, kko_parent_frag)
var1.is_a.append(var2)
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'subClassOf', 'parent'])
for row in reader:
r_id = row['id']
r_parent = row['parent']
id = row_clean(r_id, iss='i_id')
id_frag = row_clean(r_id, iss='i_id_frag')
parent = row_clean(r_parent, iss='i_parent')
parent_frag = row_clean(r_parent, iss='i_parent_frag')
if is_first_row:
is_first_row = False
continue
if parent_frag == 'Thing':
var1 = getattr(rc, id_frag)
var2 = getattr(owl, parent_frag)
try:
var1.is_a.remove(var2)
except Exception:
continue
kb.save(out_file, format="rdfxml")
print('KBpedia class structure build is complete.')
class2_struct_builder(**build_deck)
B. Build Property Structure
After classes, when then add property structure to the system. Note, however, that we now switch to our normal ‘standard’ kb source:
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard' # Set in master_deck
# 'loop_list' : prop_dict.values(),
# 'loop' : 'property_loop',
# 'base' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/properties/',
# 'ext' : '_struct_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/targets/ontologies/kbpedia_reference_concepts.csv',
# 'frag' : set in code block; see below
def prop2_struct_builder(**build_deck):
print('Beginning KBpedia property structure build . . .')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
base = build_deck.get('base')
ext = build_deck.get('ext')
out_file = build_deck.get('out_file')
if loop is not 'property_loop':
print("Needs to be a 'property_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
frag = 'prop'
in_file = (base + frag + ext)
print(in_file)
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'subPropertyOf', 'parent'])
for row in reader:
if is_first_row:
is_first_row = False
continue
r_id = row['id']
r_parent = row['parent']
value = r_parent.find('owl.')
if value == 0:
continue
value = r_id.find('rc.')
if value == 0:
id_frag = r_id.replace('rc.', '')
parent_frag = r_parent.replace('kko.', '')
var2 = getattr(kko, parent_frag)
with rc:
r_id = types.new_class(id_frag, (var2,))
kb.save(out_file, format="rdfxml")
print(kbpedia)
print(out_file)
print('KBpedia property structure build is complete.')
prop2_struct_builder(**build_deck)
C. Build Class Annotations
With the subsumption structure built, we next load our annotations, beginning with the class ones:
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard'
# 'loop_list' : file_dict.values(), # see 'in_file'
# 'loop' : 'class_loop',
# 'in_file' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/classes/Generals_annot_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/target/ontologies/kbpedia_reference_concepts.csv',
def class2_annot_build(**build_deck):
print('Beginning KBpedia class annotation build . . .')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
class_loop = build_deck.get('class_loop')
out_file = build_deck.get('out_file')
if loop is not 'class_loop':
print("Needs to be a 'class_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
in_file = loopval
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'prefLabel', 'subClassOf',
'altLabel', 'definition', 'editorialNote', 'isDefinedBy', 'superClassOf'])
for row in reader:
r_id = row['id']
id = getattr(rc, r_id)
if id == None:
print(r_id)
continue
r_pref = row['prefLabel']
r_alt = row['altLabel']
r_def = row['definition']
r_note = row['editorialNote']
r_isby = row['isDefinedBy']
r_super = row['superClassOf']
if is_first_row:
is_first_row = False
continue
id.prefLabel.append(r_pref)
i_alt = r_alt.split('||')
if i_alt != ['']:
for item in i_alt:
id.altLabel.append(item)
id.definition.append(r_def)
i_note = r_note.split('||')
if i_note != ['']:
for item in i_note:
id.editorialNote.append(item)
id.isDefinedBy.append(r_isby)
i_super = r_super.split('||')
if i_super != ['']:
for item in i_super:
item = 'http://kbpedia.org/kko/rc/' + item
# Code block to be used if objectProperty; 5.5 hr load
# item = getattr(rc, item)
# if item == None:
# print('Failed assignment:', r_id, item)
# continue
# else:
id.superClassOf.append(item)
kb.save(out_file, format="rdfxml")
print('KBpedia class annotation build is complete.')
class2_annot_build(**build_deck)
D. Build Property Annotations
And then the property annotations:
### KEY CONFIG SETTINGS (see build_deck in config.py) ###
# 'kb_src' : 'standard'
# 'loop_list' : file_dict.values(), # see 'in_file'
# 'loop' : 'property_loop',
# 'in_file' : 'C:/1-PythonProjects/kbpedia/v300/build_ins/properties/prop_annot_out.csv',
# 'out_file' : 'C:/1-PythonProjects/kbpedia/v300/target/ontologies/kbpedia_reference_concepts.csv',
def prop2_annot_build(**build_deck):
print('Beginning KBpedia property annotation build . . .')
xsd = kb.get_namespace('http://w3.org/2001/XMLSchema#')
wgs84 = kb.get_namespace('http://www.opengis.net/def/crs/OGC/1.3/CRS84')
loop_list = build_deck.get('loop_list')
loop = build_deck.get('loop')
out_file = build_deck.get('out_file')
x = 1
if loop is not 'property_loop':
print("Needs to be a 'property_loop'; returning program.")
return
for loopval in loop_list:
print(' . . . processing', loopval)
in_file = loopval
with open(in_file, 'r', encoding='utf8') as input:
is_first_row = True
reader = csv.DictReader(input, delimiter=',', fieldnames=['id', 'prefLabel', 'subPropertyOf', 'domain',
'range', 'functional', 'altLabel', 'definition', 'editorialNote'])
for row in reader:
r_id = row['id']
r_pref = row['prefLabel']
r_dom = row['domain']
r_rng = row['range']
r_alt = row['altLabel']
r_def = row['definition']
r_note = row['editorialNote']
r_id = r_id.replace('rc.', '')
id = getattr(rc, r_id)
if id == None:
continue
if is_first_row:
is_first_row = False
continue
id.prefLabel.append(r_pref)
i_dom = r_dom.split('||')
if i_dom != ['']:
for item in i_dom:
if 'kko.' in item:
item = item.replace('kko.', '')
item = getattr(kko, item)
id.domain.append(item)
elif 'owl.' in item:
item = item.replace('owl.', '')
item = getattr(owl, item)
id.domain.append(item)
elif item == ['']:
continue
elif item != '':
item = getattr(rc, item)
if item == None:
continue
else:
id.domain.append(item)
else:
print('No domain assignment:', 'Item no:', x, item)
continue
if 'owl.' in r_rng:
r_rng = r_rng.replace('owl.', '')
r_rng = getattr(owl, r_rng)
id.range.append(r_rng)
elif 'string' in r_rng:
id.range = [str]
elif 'decimal' in r_rng:
id.range = [float]
elif 'anyuri' in r_rng:
id.range = [normstr]
elif 'boolean' in r_rng:
id.range = [bool]
elif 'datetime' in r_rng:
id.range = [datetime.datetime]
elif 'date' in r_rng:
id.range = [datetime.date]
elif 'time' in r_rng:
id.range = [datetime.time]
elif 'wgs84.' in r_rng:
r_rng = r_rng.replace('wgs84.', '')
r_rng = getattr(wgs84, r_rng)
id.range.append(r_rng)
elif r_rng == ['']:
print('r_rng = empty:', r_rng)
else:
print('r_rng = else:', r_rng, id)
# id.range.append(r_rng)
i_alt = r_alt.split('||')
if i_alt != ['']:
for item in i_alt:
id.altLabel.append(item)
id.definition.append(r_def)
i_note = r_note.split('||')
if i_note != ['']:
for item in i_note:
id.editorialNote.append(item)
x = x + 1
kb.save(out_file, format="rdfxml")
print('KBpedia property annotation build is complete.')
prop2_annot_build(**build_deck)
Beginning KBpedia property annotation build . . .
. . . processing C:/1-PythonProjects/kbpedia/v300/build_ins/properties/prop_annot_out.csv
r_rng = else: xsd.anyURI rc.release_notes
r_rng = else: xsd.anyURI rc.schema_version
r_rng = else: xsd.anyURI rc.unit_code
r_rng = else: xsd.anyURI rc.property_id
r_rng = else: xsd.anyURI rc.ticket_token
r_rng = else: xsd.anyURI rc.role_name
r_rng = else: xsd.anyURI rc.feature_list
r_rng = else: xsd.hexBinary rc.associated_media
r_rng = else: xsd.hexBinary rc.encoding
r_rng = else: xsd.hexBinary rc.encodings
r_rng = else: xsd.hexBinary rc.photo
r_rng = else: xsd.hexBinary rc.photos
r_rng = else: xsd.hexBinary rc.primary_image_of_page
r_rng = else: xsd.hexBinary rc.thumbnail
r_rng = else: xsd.anyURI rc.code_repository
r_rng = else: xsd.anyURI rc.content_url
r_rng = else: xsd.anyURI rc.discussion_url
r_rng = else: xsd.anyURI rc.download_url
r_rng = else: xsd.anyURI rc.embed_url
r_rng = else: xsd.anyURI rc.install_url
r_rng = else: xsd.anyURI rc.map
r_rng = else: xsd.anyURI rc.maps
r_rng = else: xsd.anyURI rc.payment_url
r_rng = else: xsd.anyURI rc.reply_to_url
r_rng = else: xsd.anyURI rc.service_url
r_rng = else: xsd.anyURI rc.significant_link
r_rng = else: xsd.anyURI rc.significant_links
r_rng = else: xsd.anyURI rc.target_url
r_rng = else: xsd.anyURI rc.thumbnail_url
r_rng = else: xsd.anyURI rc.tracking_url
r_rng = else: xsd.anyURI rc.url
r_rng = else: xsd.anyURI rc.related_link
r_rng = else: xsd.anyURI rc.genre_schema
r_rng = else: xsd.anyURI rc.same_as
r_rng = else: xsd.anyURI rc.action_platform
r_rng = else: xsd.anyURI rc.fees_and_commissions_specification
r_rng = else: xsd.anyURI rc.requirements
r_rng = else: xsd.anyURI rc.software_requirements
r_rng = else: xsd.anyURI rc.storage_requirements
r_rng = else: xsd.anyURI rc.artform
r_rng = else: xsd.anyURI rc.artwork_surface
r_rng = else: xsd.anyURI rc.course_mode
r_rng = else: xsd.anyURI rc.encoding_format
r_rng = else: xsd.anyURI rc.file_format_schema
r_rng = else: xsd.anyURI rc.named_position
r_rng = else: xsd.anyURI rc.surface
r_rng = else: wgs84 rc.geo_midpoint
r_rng = else: xsd.anyURI rc.memory_requirements
r_rng = else: wgs84 rc.aerodrome_reference_point
r_rng = else: wgs84 rc.coordinate_location
r_rng = else: wgs84 rc.coordinates_of_easternmost_point
r_rng = else: wgs84 rc.coordinates_of_northernmost_point
r_rng = else: wgs84 rc.coordinates_of_southernmost_point
r_rng = else: wgs84 rc.coordinates_of_the_point_of_view
r_rng = else: wgs84 rc.coordinates_of_westernmost_point
r_rng = else: wgs84 rc.geo
r_rng = else: xsd.anyURI rc.additional_type
r_rng = else: xsd.anyURI rc.application_category
r_rng = else: xsd.anyURI rc.application_sub_category
r_rng = else: xsd.anyURI rc.art_medium
r_rng = else: xsd.anyURI rc.sport_schema
KBpedia property annotation build is complete.
E. Ingest of Mappings
Mappings to external sources are an integral part of KBpedia, as is likely the case for any similar, large-scale knowledge graph. As such, ingest of new or revised mappings is also a logical step in the overall build process, and occurs at this point in the sequence.
Though we will not address mappings until CWPK #49, those steps belong here in the overall set of procedures for the extract-build roundtrip process.
6. Test Build
We then conduct our series of logic tests (CWPK #43). This portion of the process may actually be the longest of all, given that it may take multiple iterations to pass all of these tests. However, in other circumstances, the build tests may also go quite quickly if relatively few changes were made between versions.
Wrap Up
Of course, these steps could be embedded in an overall ‘complete’ extract and build routine, but I have not done so.
Before we conclude this major part in our CWPK series, we next proceed to show how all of the steps may be combined to achieve a rather large re-factoring of all of KBpedia. |
Hello guys,
I am trying to take vibration data using an accelerometer. I didn't connect any ADC. The accelerometer is taking data at a rate of 400Hz but my pi is stoing it at a different rate. For example, in one seocnd its 250 samples the next second its 270. This is a big problem as I want to have the FFT of the measured signal. To do that I need to have a fixed number of sample per second. Is there anyway to improve this?
I am trying to take vibration data using an accelerometer. I didn't connect any ADC. The accelerometer is taking data at a rate of 400Hz but my pi is stoing it at a different rate. For example, in one seocnd its 250 samples the next second its 270. This is a big problem as I want to have the FFT of the measured signal. To do that I need to have a fixed number of sample per second. Is there anyway to improve this?
Welcome to the forums.
Sorry, but you've not provided any real information that will help solve your problem.
What Pi are you using ?
What language is your application written in ?
Post your non-working code and the output that shows the problem.
What is the accelerometer you are using ?
How is it connected to the PI ?
PeterO
Sorry, but you've not provided any real information that will help solve your problem.
What Pi are you using ?
What language is your application written in ?
Post your non-working code and the output that shows the problem.
What is the accelerometer you are using ?
How is it connected to the PI ?
PeterO
Discoverer of the PI2 XENON DEATH FLASH!
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson
Interests: C,Python,PIC,Electronics,Ham Radio (G0DZB),1960s British Computers.
"The primary requirement (as we've always seen in your examples) is that the code is readable. " Dougie Lawson
Thanks for the reply.
I am using Pi 3 B+ with an accelerometer adafruit MMA8451. The language I am using is python. This is the particulart coding portion where my pi is capturing data from accelerometer and writing it to a csv file.
while True:
x, y, z = sensor.acceleration
time_now = datetime.datetime.now().strftime("%Y-%m-%d")
TimePresent = time.time()
Timer = TimePresent - TimeStart
X = x #+ Calcx
Y = y #+ Calcy
Z = z #+ Calcz
count = count + 1
print('DateTime={0} Time ={1} X={2:0.3f} m/s^2 Y:{3:0.3f} m/s^2 Z:{4:0.3f} m/s^2 count={5}'.format(time_now, Timer, X, Y, Z, count))
sensorwriter.writerow([time_now, Timer, X, Y, Z, count])
time.sleep(1/150)
if Timer > TimingA:
exit()
The sampling frquency of the accelerometer is 800Hz. The pi should store 150 datas per second accoridng to the code. But its not storing that many. In addition, the number of samples its storing is different everytime.
I am using Pi 3 B+ with an accelerometer adafruit MMA8451. The language I am using is python. This is the particulart coding portion where my pi is capturing data from accelerometer and writing it to a csv file.
while True:
x, y, z = sensor.acceleration
time_now = datetime.datetime.now().strftime("%Y-%m-%d")
TimePresent = time.time()
Timer = TimePresent - TimeStart
X = x #+ Calcx
Y = y #+ Calcy
Z = z #+ Calcz
count = count + 1
print('DateTime={0} Time ={1} X={2:0.3f} m/s^2 Y:{3:0.3f} m/s^2 Z:{4:0.3f} m/s^2 count={5}'.format(time_now, Timer, X, Y, Z, count))
sensorwriter.writerow([time_now, Timer, X, Y, Z, count])
time.sleep(1/150)
if Timer > TimingA:
exit()
The sampling frquency of the accelerometer is 800Hz. The pi should store 150 datas per second accoridng to the code. But its not storing that many. In addition, the number of samples its storing is different everytime.
Added forum
Codetags to make sense of the Python fragment.
Ankit9148 wrote: ↑Fri May 17, 2019 1:47 pmThanks for the reply.
I am using Pi 3 B+ with an accelerometer adafruit MMA8451. The language I am using is python. This is the particulart coding portion where my pi is capturing data from accelerometer and writing it to a csv file.
The sampling frquency of the accelerometer is 800Hz. The pi should store 150 datas per second accoridng to the code. But its not storing that many. In addition, the number of samples its storing is different everytime.
Code: Select all
while True:
x, y, z = sensor.acceleration
time_now = datetime.datetime.now().strftime("%Y-%m-%d")
TimePresent = time.time()
Timer = TimePresent - TimeStart
X = x #+ Calcx
Y = y #+ Calcy
Z = z #+ Calcz
count = count + 1
print('DateTime={0} Time ={1} X={2:0.3f} m/s^2 Y:{3:0.3f} m/s^2 Z:{4:0.3f} m/s^2 count={5}'.format(time_now, Timer, X, Y, Z, count))
sensorwriter.writerow([time_now, Timer, X, Y, Z, count])
time.sleep(1/150)
if Timer > TimingA:
exit()
I can not give you a solution but I can make a few comments. The first comment is that you must get used to using [code] tagsAnkit9148 wrote: ↑Fri May 17, 2019 1:47 pmThanks for the reply.
I am using Pi 3 B+ with an accelerometer adafruit MMA8451. The language I am using is python. This is the particulart coding portion where my pi is capturing data from accelerometer and writing it to a csv file.
while True:
x, y, z = sensor.acceleration
time_now = datetime.datetime.now().strftime("%Y-%m-%d")
TimePresent = time.time()
Timer = TimePresent - TimeStart
X = x #+ Calcx
Y = y #+ Calcy
Z = z #+ Calcz
count = count + 1
print('DateTime={0} Time ={1} X={2:0.3f} m/s^2 Y:{3:0.3f} m/s^2 Z:{4:0.3f} m/s^2 count={5}'.format(time_now, Timer, X, Y, Z, count))
sensorwriter.writerow([time_now, Timer, X, Y, Z, count])
time.sleep(1/150)
if Timer > TimingA:
exit()
The sampling frquency of the accelerometer is 800Hz. The pi should store 150 datas per second accoridng to the code. But its not storing that many. In addition, the number of samples its storing is different everytime.
Code: Select all
while True:
x, y, z = sensor.acceleration
time_now = datetime.datetime.now().strftime("%Y-%m-%d")
TimePresent = time.time()
Timer = TimePresent - TimeStart
X = x #+ Calcx
Y = y #+ Calcy
Z = z #+ Calcz
count = count + 1
print('DateTime={0} Time ={1} X={2:0.3f} m/s^2 Y:{3:0.3f} m/s^2 Z:{4:0.3f} m/s^2 count={5}'.format(time_now, Timer, X, Y, Z, count))
sensorwriter.writerow([time_now, Timer, X, Y, Z, count])
time.sleep(1/150)
if Timer > TimingA:
exit()
How much time do you think that the command will take to be executed ? How much time is needed to read the sensor and how much time is needed to print ? Have you timed this part of the code ?
Take some time to think about this and only then read the next lines of this post.
According to you the code should store 150 samples each second, but that can never be reached because of the 0,0066666666666667 second sleep. Theoretically this would be 150 sleeps per second but this ignores the fact that your code needs some time execute, for this reason 150 samples will never be captured.
I would suggest google "python sleep granularity" to find information on the "accuracy" on the sleep command.
I am sure that it can be done somehow but I do not have an answer, maybe somebody else can help.
The road to insanity is paved with static ip addresses
According to your comment, it should be okay if I take time.sleep(1/100). But its not still capturing 100 samples per second. Its like sometimes 70, sometimes 80. The TimingA I am taking is 1 second to understand the output number of samples clearly.
Thank you for your suggestions anyway.
Thank you for your suggestions anyway.
Absolutely correct, working as expected, you have to adjust your thinking.
Consider this, if the time needed to read the data is 0.002 seconds and the time to print is 0.001 seconds then how many reads/prints can done each second ? (answer: 333 and a bit). Now add the sleep 1/100 (or 0.01 sec) for each cycle giving 0.013 sec. That means that in one second about 76.92 transactions can be performed.
I am now going to give you something to consider, you need a different method for timing. I do not have a solution but I would look for a way to wake up the loop using a timer to wake up the task.
The road to insanity is paved with static ip addresses
You have to read more carefully.
read sample + store sample + print sample + sleep 0.01 seconds will only do 100 samples per second if read/store/print each take no time. They each take some time. You need to factor in that processing time.
read sample + store sample + print sample + sleep 0.01 seconds will only do 100 samples per second if read/store/print each take no time. They each take some time. You need to factor in that processing time.
joan wrote: You have to read more carefully.
read sample + store sample + print sample + sleep 0.01 seconds will only do 100 samples per second if read/store/print each take no time. They each take some time. You need to factor in that processing time.
Yes, I understand that. But why the number of samples are changing evrytime. Some time its takeing 70 samples, the other time its taking 80 samples. Is there any way that I can have fixed number of samples per second with high precision in case the sampling rate is as high as 100 or 150?
Here's two to start with:
Record the time before the processing starts. When you get to the sleep, calculate the remaining time to the next sample time and use that.
Use a timer-driven interrupt to trigger the processing.
You will have to determine the top sampling rate available in Python (an interpreted language, AKA relatively slow). It may be necessary to use a compiled language (C, C++, etc) for better performance.
You may need to run on an isolated core to avoid system interrupts, which can disrupt timing quite badly.
Ultimately, you may need to use a real time (RT) kernel to allow precise timing.
Location:345th cell on the right of the 210th row of L2 cache
@davidcotton: you were faster
The road to insanity is paved with static ip addresses
Even taking into account the time taken to execute the code, as Ernst hinted at earlier
* If your process receives a signal whilst it is in a
time.sleep()doesn't sleep forexactlythe specified length time, rather it sleeps forat least* the specified length of time. If, after the specified time has passed, there other processes (especially those with a higher priority than yours) that are also ready to run then your process can be kept sleeping until the Kernel decides that you can have some CPU time.
* If your process receives a signal whilst it is in a
time.sleep()then the sleep can return before the specified length of time has elapsed.
She who travels light — forgot something.
Please note that my name
Please note that my name
doesn'tstart with thecharacter so can people please stop writing it as if it does!@
The last sentence in the preceding post answers something that was puzzling me - why does the script store
moredata than expected... ?
The accelerometer is taking data at a rate of 400Hz but my pi is stoing it at a different rate. For example, in one seocnd its 250 samples the next second its 270.
Just noticed PEP 475, as of Python 3.5
PEP 475 states that all standard library system calls that could terminate early due to interrupts will now (as of Python 3.5) automatically retry with any timeouts adjusted.
time.sleep()won't return early on a signal unless a signal handler catches the signal and raises an exception.
PEP 475 states that all standard library system calls that could terminate early due to interrupts will now (as of Python 3.5) automatically retry with any timeouts adjusted.
She who travels light — forgot something.
Please note that my name
Please note that my name
doesn'tstart with thecharacter so can people please stop writing it as if it does!@ |
Now I know that it is not safe to modify the list during an iterative looping. However, suppose I have a list of strings, and I want to strip the strings themselves. Does replacement of mutable values count as modification?
It's considered poor form. Use a list comprehension instead, with slice assignment if you need to retain existing references to the list.
a = [1, 3, 5]
b = a
a[:] = [x + 2 for x in a]
print(b)
Since the loop below only modifies elements already seen, it would be considered acceptable:
a = ['a',' b', 'c ', ' d ']
for i, s in enumerate(a):
a[i] = s.strip()
print(a) # -> ['a', 'b', 'c', 'd']
Which is different from:
a[:] = [s.strip() for s in a]
in that it doesn't require the creation of a temporary list and an assignment of it to replace the original, although it does require more indexing operations.
Caution: Although you can modify entries this way, you can't change the number of items in the list without risking the chance of encountering problems.
Here's an example of what I mean—deleting an entry messes-up the indexing from that point on:
b = ['a', ' b', 'c ', ' d ']
for i, s in enumerate(b):
if s.strip() != b[i]: # leading or trailing whitespace?
del b[i]
print(b) # -> ['a', 'c '] # WRONG!
(The result is wrong because it didn't delete all the items it should have.)
Update
Since this is a fairly popular answer, here's how to effectively delete entries "in-place" (even though that's not exactly the question):
b = ['a',' b', 'c ', ' d ']
b[:] = [entry for entry in b if entry.strip() == entry]
print(b) # -> ['a'] # CORRECT
|
Amazon SageMaker のアップデート、東京リージョン、CloudFormation、Chainer、GreenGrass ML
本日、東京での AWS Summit で、Amazon SageMaker の多数のアップデートや新機能が発表されました。本日より、SageMaker が アジアパシフィック (東京) で利用可能になります!また、SageMaker は CloudFormation もサポートします。SageMaker Python SDK では、MXNet および Tensorflow に加えて、機械学習の新しいフレームワークである Chainer も利用できます。最後に、いくつかのデバイスでの Chainer モデルの実行に対するサポートが AWS Greengrass Machine Learning に追加されました。
Amazon SageMaker Chainer エスティメーター
Chainer は、定評がある、柔軟で直感的な深層学習のフレームワークです。Chainer ネットワークは、ネットワークトポロジが順方向計算によって動的に定義される「Define-by-Run」スキームで動作します。これは、ネットワークのトポロジがデータとは別に定義される「定義と実行」スキームで動作する他の多くのフレームワークとは対照的です。多くの開発者は、ネイティブの Python の構造やツールでネットワークを書くことができるので、Chainer スキームを重宝しています。
幸いなことに、SageMaker で Chainer を使用することは、TensorFlow または MXNet のエスティメーターを使用するのと同じくらい簡単です実際には、既存のスクリプトを使用して、少し修正するだけで SageMaker でトレーニングすることができるので、さらに簡単かもしれません。TensorFlow または MXNet を使用する場合には、特定の署名を持つトレーニング機能を実装する必要があります。Chainer を使用する場合は、 SM_MODEL_DIR、 SM_NUM_GPUS、その他の環境変数から簡単に読み込めるので、スクリプトはより移植しやすくなります。既存のスクリプトを、 if __name__ == '__main__': のガードでラッピングして、ローカルまたは sagemaker で呼び出すことができます。
import argparse
import os
if __name__ =='__main__':
parser = argparse.ArgumentParser()
# hyperparameters sent by the client are passed as command-line arguments to the script.
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--batch-size', type=int, default=64)
parser.add_argument('--learning-rate', type=float, default=0.05)
# Data, model, and output directories
parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])
args, _ = parser.parse_known_args()
# ... load from args.train and args.test, train a model, write model to args.model_dir.
次に、そのスクリプトをローカルで実行するか、SageMaker Python SDK を使用して SageMaker の一部の GPU インスタンスで起動することができます。ハイパーパラメータは CLI コマンドとしてスクリプトに渡され、上記の環境変数は自動集計されます。fit を呼び出すと、渡される入力チャネルに SM_CHANNEL_* 環境変数が設定されます。
from sagemaker.chainer.estimator import Chainer
# Create my estimator
chainer_estimator = Chainer(
entry_point='example.py',
train_instance_count=1,
train_instance_type='ml.p3.2xlarge',
hyperparameters={'epochs': 10, 'batch-size': 64}
)
# Train my estimator
chainer_estimator.fit({'train': train_input, 'test': test_input})
# Deploy my estimator to a SageMaker Endpoint and get a Predictor
predictor = chainer_estimator.deploy(
instance_type="ml.m4.xlarge",
initial_instance_count=1
)
ここで、Chainer でのトレーニングおよびホスティングのために独自の docker コンテナを用意する代わりに、スクリプトをそのまま使うことができます。完全な sagemaker-chainer-containers は github で確認できます。新しいコンテナで私が気に入っている機能の 1 つは、chainer トレーニングジョブのマルチノード配布を容易にする組み込みの chainermn です。
README とサンプルノートブックの両方に、より多くのドキュメントと情報があります。
AWS GreenGrass ML と Chainer
AWS GreenGrass ML には、Intel Atomを搭載したすべてのデバイス、NVIDIA Jetson、TX2、Raspberry Pi 向けに、事前に構築された Chainer パッケージが含まれています。したがって、GreenGrass ML は、TensorFlow、Apache MXNet、Chainer 向けの事前に構築されたパッケージを提供していることになります!SageMaker でモデルをトレーニングし、GreenGrass ML を使用して、GreenGrass 対応デバイスに簡単にデプロイすることができます。
JAWS UG
本日、東京で開催された AWS Summit に出席した JAWS UG の素晴らしい友人達に心から感謝したいと思います。Summit の写真を見て楽しんでいます。日本を、AWS 開発者にとって素晴らしい場所にしてくれてありがとうございます!再び訪問し、お目にかかれることを楽しみにしています。
– Randall |
Summary
Creates a table view from an input table or feature class. The table view that is created by the tool is temporary and will not persist after the session ends unless the document is saved.
Usage
This tool is commonly used to create a table view with a selected set of attributes or fields.
ArcCatalog does not display these table views, but they can be used as inputs to other geoprocessing tools in the current ArcGIS session. Once the ArcGIS application is exited, the table views are deleted.
Table views created in ArcCatalog cannot be used in ArcMap.
If an SQL expression is used but returns nothing, the output will be empty.
Field names can be given a new name by using the Field Info control. The second column on the control lists the existing field names from the input. To rename a field, click the field name and type in a new one.
Field names defined in the Field Info control will be honored in subsequent tools. However, if this tool is the last tool in a model, the field names will be obtained from the source data on disk. To maintain the field names, the new layer has to be written out to a new data using Copy Rows or Copy Features tools.
The field names will be validated by specifying an input workspace. Thus, if the input is a geodatabase feature class, and the output workspace is a folder, the field names may be truncated, since shapefile attributes can only have names of ten characters or less. The new names may be reviewed and altered using the Field Info control.
A subset of fields can be made unavailable in the new layer by using the Field Info control's visible property. The third column in the control provides a dropdown option to specify whether a field will be visible or hidden in the new layer. The default is TRUE. Selecting FALSE will hide that field. You cannot use the hidden fields in a workflow if the newly created layer is input to a subsequent process or tool. If the output is saved to disk, only the fields listed as visible will appear in the new data.
The split policy option on the Field Info control does not apply to this tool.
Syntax
MakeTableView(in_table, out_view, {where_clause}, {workspace}, {field_info})
Parameter Explanation Data Type
in_table
The input table or feature class.
Table View;Raster Layer
out_view
The name of the table view to be created.
Table View ;Raster Layer
where_clause
(Optional)
An SQL expression used to select a subset of features. For more information on SQL syntax see the help topic SQL reference for query expressions used in ArcGIS.
SQL Expression
workspace
(Optional)
The input workspace used to validate the field names. If the input is a geodatabase table and the output workspace is a dBASE table, the field names may be truncated, since dBASE fields can only have names of ten characters or less. The new names may be reviewed and altered using the field information control.
Workspace
field_info
(Optional)
Specifies which fields from the input table to rename and make visible in the output table view.
Field Info
Code sample
MakeTableView example 1 (Python window)
The following Python window script demonstrates how to use the MakeTableView function in immediate mode.
import arcpy arcpy.MakeTableView_management("C:/data/input/crimefrequency.dbf", "crimefreq_tview")
MakeTableView example 2 (stand-alone script)
The following stand-alone script demonstrates how to use MakeTableView with a FieldInfo object to filter fields in the output.
# Name: MakeTableView_Example2.py # Description: Uses a FieldInfo object to select a subset of fields and renaming one field's name. # Import system modules import arcpy # Set data path intable = "C:/data/tables.gdb/crimefreq" # Get the fields from the input fields= arcpy.ListFields(intable) # Create a fieldinfo object fieldinfo = arcpy.FieldInfo() # Iterate through the fields and set them to fieldinfo for field in fields: if field.name == "FREQUENCY": fieldinfo.addField(field.name, "NEWFREQ", "VISIBLE", "") elif field.name == "CRIME_CAT": fieldinfo.addField(field.name, field.name, "HIDDEN", "") elif field.name == "BEAT": fieldinfo.addField(field.name, field.name, "VISIBLE", "") # The created crime_view layer will have fields as set in fieldinfo object arcpy.MakeTableView_management(intable, "crime_view", "", "", fieldinfo) # To persist the layer on disk make a copy of the view arcpy.CopyRows_management("crime_view", "C:/temp/newfreq.dbf")
Environments
Licensing information
Basic: Yes
Standard: Yes
Advanced: Yes |
#
# storage
#
@di.provides(scope="PER_ROOT")
def transaction():
return Transaction()
@di.provides(scope="GLOBAL")
def storage__sqlite_instance(configuration):
return SQLiteInstance.create_or_open(configuration.storage_root_path_sqlite)
@di.provides(scope="PER_ROOT")
def storage__DB(storage__sqlite_instance, transaction):
return SQliteTX(storage__sqlite_instance, transaction)
The idea is that you normally operate in some kind of business transaction or context.
At the beginning of such a transaction (say the request handler for a REST call) you would create a new business transaction (I named it query root):
query_root = di.new_query_root()
db = query_root.get_dependency(“storage__DB”)
|
ХоÑÑ Ð½Ð°Ð¿Ð¸ÑаÑÑ Ð¿ÑогÑÐ°Ð¼Ð¼Ñ ÐºÐ¾ÑоÑÐ°Ñ Ð²ÑÐ²Ð¾Ð´Ð¸Ñ ÑлÑÑайное ÑиÑло. ÐÐ¾Ñ ÐºÐ¾Ð´:
import random
determinant_number_of_letters = random.randint(1,5)
def real_letter(determinant_of_letter):
determinant_of_letter = random.randint(1,10)
adder = []
if determinant_of_letter == 1:
print("a")
if determinant_of_letter == 2:
print("b")
if determinant_of_letter == 3:
print("c")
if determinant_of_letter == 4:
print("d")
if determinant_of_letter == 5:
print("e")
if determinant_of_letter == 6:
print("f")
if determinant_of_letter == 7:
print("g")
if determinant_of_letter == 8:
print("h")
if determinant_of_letter == 9:
print("i")
if determinant_of_letter == 10:
print("j")
if determinant_number_of_letters == 1:
real_letter(random.randint(1,10))
if determinant_number_of_letters == 2:
for i in range(0,2):
real_letter(random.randint(1,10))
if determinant_number_of_letters == 3:
for i in range(0, 3):
real_letter(random.randint(1,10))
if determinant_number_of_letters == 4:
for i in range(0, 4):
real_letter(random.randint(1,10))
if determinant_number_of_letters == 5:
for i in range(0, 5):
real_letter(random.randint(1,10))
Ðа вÑÑ Ð¾Ð´Ðµ пÑогÑамма вÑдаÑÑ Ð¾ÑделÑнÑе ÑÑÑоки ÑÑо Ñо Ñипа:
adcf
Ðне же нÑжно ÑÑо Ð±Ñ Ð¿ÑогÑамма вÑводила Ð¸Ñ Ð² ÑÑÑÐ¾ÐºÑ Ñак:
adcf
ÐодÑкажиÑе как можно ÑеÑиÑÑ ÑÑÑ Ð¿ÑоблемÑ. |
Data
Graph
spektral.data.graph.Graph(x=None, a=None, e=None, y=None)
A container to represent a graph. The data associated with the Graph is stored in its attributes:
x, for the node features;
a, for the adjacency matrix;
e, for the edge attributes;
y, for the node or graph labels;
All of these default to None if you don't specify them in the constructor.If you want to read all non-None attributes at once, you can call thenumpy() method, which will return all data in a tuple (with the orderdefined above).
Graphs also have the following attributes that are computed automatically from the data:
n_nodes: number of nodes;
n_edges: number of edges;
n_node_features: size of the node features, if available;
n_edge_features: size of the edge features, if available;
n_labels: size of the labels, if available;
Any additional kwargs passed to the constructor will be automaticallyassigned as instance attributes of the graph.
Data can be stored in Numpy arrays or Scipy sparse matrices, and labels can also be scalars.
Spektral usually assumes that the different data matrices have specificshapes, although this is not strictly enforced to allow more flexibility.In general, node attributes should have shape (n_nodes, n_node_features) and the adjacencymatrix should have shape (n_nodes, n_nodes).
Edge attributes can be stored in a dense format as arrays of shape(n_nodes, n_nodes, n_edge_features) or in a sparse format as arrays of shape (n_edges, n_edge_features)(so that you don't have to store all the zeros for missing edges). Mostcomponents of Spektral will know how to deal with both situationsautomatically.
Labels can refer to the entire graph (shape (n_labels, )) or to eachindividual node (shape (n_nodes, n_labels)).
Arguments
x: np.array, the node features (shape(n_nodes, n_node_features));
a: np.array or scipy.sparse matrix, the adjacency matrix (shape(n_nodes, n_nodes));
e: np.array, the edge features (shape(n_nodes, n_nodes, n_edge_features)or(n_edges, n_edge_features));
y: np.array, the node or graph labels (shape(n_nodes, n_labels)or(n_labels, ));
Dataset
spektral.data.dataset.Dataset(transforms=None)
A container for Graph objects. This class can be extended to represent a graph dataset.
To create a Dataset, you must implement the Dataset.read() method, whichmust return a list of spektral.data.Graph objects:
class MyDataset(Dataset):
def read(self):
return [Graph(x=x, adj=adj, y=y) for x, adj, y in some_magic_list]
The download() method is automatically called if the path returned byDataset.path does not exists (default ~/.spektral/datasets/ClassName/).
In this case, download() will be called before read().
Datasets should generally behave like Numpy arrays for any operation that uses simple 1D indexing:
>>> dataset[0]
Graph(...)
>>> dataset[[1, 2, 3]]
Dataset(n_graphs=3)
>>> dataset[1:10]
Dataset(n_graphs=9)
>>> np.random.shuffle(dataset) # shuffle in-place
>>> for graph in dataset[:3]:
>>> print(graph)
Graph(...)
Graph(...)
Graph(...)
Datasets have the following properties that are automatically computed:
n_nodes: the number of nodes in the dataset (always None, except in single and mixed mode datasets);
n_node_features: the size of the node features (assumed to be equal for all graphs);
n_edge_features: the size of the edge features (assumed to be equal for all graphs);
n_labels: the size of the labels (assumed to be equal for all graphs); this is computed asy.shape[-1].
Any additional kwargs passed to the constructor will be automaticallyassigned as instance attributes of the dataset.
Datasets also offer three main manipulation functions to apply callables to their graphs:
apply(transform): replaces each graph with the output oftransform(graph). Seespektral.transformsfor some ready-to-use transforms.
Example:apply(spektral.transforms.NormalizeAdj())normalizes the adjacency matrix of each graph in the dataset.
map(transform, reduce=None): returns a list containing the output oftransform(graph)for each graph. Ifreduceis acallable, then returnsreduce(output_list).
Example:map(lambda: g.n_nodes, reduce=np.mean)will return the average number of nodes in the dataset.
filter(function): removes from the dataset any graph for whichfunction(graph) is False.
Example:filter(lambda: g.n_nodes < 100)removes from the dataset all graphs bigger than 100 nodes.
Datasets in mixed mode (one adjacency matrix, many instances of node features)are expected to have a particular structure.The graphs returned by read() should not have an adjacency matrix,which should be instead stored as a singleton in the dataset's a attribute.For example:
class MyMixedModeDataset(Dataset):
def read(self):
self.a = compute_adjacency_matrix()
return [Graph(x=x, y=y) for x, y in some_magic_list]
Have a look at the spektral.datasets module for examples of populardatasets already implemented.
Arguments
transforms: a callable or list of callables that are automatically applied to the graphs after loading the dataset.
Data utils
to_disjoint
spektral.data.utils.to_disjoint(x_list=None, a_list=None, e_list=None)
Converts lists of node features, adjacency matrices and edge features to disjoint mode.
Either the node features or the adjacency matrices must be provided as input.
The i-th element of each list must be associated with the i-th graph.
The method also computes the batch index to retrieve individual graphs from the disjoint union.
Edge attributes can be represented as:
a dense array of shape (n_nodes, n_nodes, n_edge_features);
a sparse edge list of shape (n_edges, n_edge_features);
and they will always be returned as a stacked edge list.
Arguments
x_list: a list of np.arrays of shape(n_nodes, n_node_features)-- note thatn_nodescan change between graphs;
a_list: a list of np.arrays or scipy.sparse matrices of shape(n_nodes, n_nodes);
e_list: a list of np.arrays of shape(n_nodes, n_nodes, n_edge_features)or(n_edges, n_edge_features);
Return
Only if the corresponding list is given as input:
x: np.array of shape(n_nodes, n_node_features);
a: scipy.sparse matrix of shape(n_nodes, n_nodes);
e: np.array of shape(n_edges, n_edge_features);
i: np.array of shape(n_nodes, );
to_batch
spektral.data.utils.to_batch(x_list=None, a_list=None, e_list=None)
Converts lists of node features, adjacency matrices and edge features tobatch mode,by zero-padding all tensors to have the same node dimension n_max.
Either the node features or the adjacency matrices must be provided as input.
The i-th element of each list must be associated with the i-th graph.
If a_list contains sparse matrices, they will be converted to densenp.arrays.
The edge attributes of a graph can be represented as
a dense array of shape (n_nodes, n_nodes, n_edge_features);
a sparse edge list of shape (n_edges, n_edge_features);
and they will always be returned as dense arrays.
Arguments
x_list: a list of np.arrays of shape(n_nodes, n_node_features)-- note thatn_nodescan change between graphs;
a_list: a list of np.arrays or scipy.sparse matrices of shape(n_nodes, n_nodes);
e_list: a list of np.arrays of shape(n_nodes, n_nodes, n_edge_features)or(n_edges, n_edge_features);
Return
Only if the corresponding list is given as input:
x: np.array of shape(batch, n_max, n_node_features);
a: np.array of shape(batch, n_max, n_max);
e: np.array of shape(batch, n_max, n_max, n_edge_features);
to_mixed
spektral.data.utils.to_mixed(x_list=None, a=None, e_list=None)
Converts lists of node features and edge features to mixed mode.
The adjacency matrix must be passed as a singleton, i.e., a single np.array or scipy.sparse matrix shared by all graphs.
Edge attributes can be represented as:
a dense array of shape (n_nodes, n_nodes, n_edge_features);
a sparse edge list of shape (n_edges, n_edge_features);
and they will always be returned as a batch of edge lists.
Arguments
x_list: a list of np.arrays of shape(n_nodes, n_node_features)-- note thatn_nodesmust be the same between graphs;
a: a np.array or scipy.sparse matrix of shape(n_nodes, n_nodes);
e_list: a list of np.arrays of shape(n_nodes, n_nodes, n_edge_features)or(n_edges, n_edge_features);
Return
Only if the corresponding element is given as input:
x: np.array of shape(batch, n_nodes, n_node_features);
a: scipy.sparse matrix of shape(n_nodes, n_nodes);
e: np.array of shape(batch, n_edges, n_edge_features);
batch_generator
spektral.data.utils.batch_generator(data, batch_size=32, epochs=None, shuffle=True)
Iterates over the data for the given number of epochs, yielding batches ofsize batch_size.
Arguments
data: np.array or list of np.arrays with the same first dimension;
batch_size: number of samples in a batch;
epochs: number of times to iterate over the data (default None, iterates indefinitely);
shuffle: whether to shuffle the data at the beginning of each epoch
Return
Batches of size batch_size.
to_tf_signature
spektral.data.utils.to_tf_signature(signature)
Converts a Dataset signature to a TensorFlow signature.
Arguments
signature: a Dataset signature.
Return
A TensorFlow signature. |
Why
While I do use sqlalchemy and to some extent peewee for my projects, I slowly got tired of having to relearn how to write SQL when I’ve known SQL since the mid-90’s.
DCDB’s design is also aiming for simplicity and minimal behind the scenes automagical behaviors. Instead complexity should be added voluntarily and in such a way that it can be traced back.
Example
import dataclasses as dcs
import dcdb
@dcs.dataclass()
class Foo:
name:str
age:int
db = dcdb.DBConnection(":memory:") # alternatively this can be a file path
db.bind(Foo)
"""
Bind doesn't change Foo in the local scope but instead
it creates a new class DCDB_Foo which is stored to the DBConnection in it's
table registry.
Behind the scenes, a table `Foo` is created to the connected database. No changes to the name are made (eg pluralization). How you wrote your bound dataclasses is almost exactly how it is stored in the sqlite database.
An exception is that a .id instance property along with DB methods like: update/save, Create, Get, and Select are added to the class definition.
"""
record = db.t.Foo(name="Bob", age="44")
assert record.name == "Bob"
same_record = db.t.Foo.Get("name=?", "Bob")
assert record.age == 44
assert record.id == same_record.id
record.age = 32
record.save()
same_record = db.t.Foo.Get("age=?", 32)
assert record.id == same_record.id
assert same_record.age == 32
same_record.delete()
"""
Note it is important to notice that currently same_record and
record have the same .id # property but they are different
instances and copies of the same record with no shared reference.
Changes to one copy will not reflect with the other.
"""
Github DCDB |
El problema es que en algunas vistas, obtengo manualmente una variable de contexto (digamos “G”) de interés, ya que la uso para buscar otra información en esa vista particular (ieviews A, B, C), pero en otras vistas ( es decir, X, Y, Z), necesito obtener esa variable de contexto particular, ya que este contexto debe estar disponible en todas las vistas de mi proyecto (ya que mi plantilla base usa la variable de contexto). El problema con el uso de un procesador de contexto personalizado es que creo que hará una llamada de DB adicional e IDÉNTICA en las vistas (A, B, C), ya que esas vistas ya están recibiendo esa variable de contexto, ya que es necesario para obtener otros datos en la vista. Lo que pensaba era que tal vez podría implementar un procesador de contexto que verifique si esa variable de contexto específica está configurada para una solicitud determinada. es posible? ¿Hay una solución más fácil? El código a continuación puede aclarar el problema para algunas personas.
¡Gracias por cualquier consejo!
def viewA(request): g=G.objects.get(user=request.user) posts = Post.objects.filter(g=g) return direct_to_template(request,'something.html',{'G':g, 'posts':posts}) def viewX(request): stuff = Albums.objects.get(user=request.user) return direct_to_template(request,'something2.html',{'stuff':stuff}) def my_context_processor(request): #redundant in case of viewA (hits db again?) return {'G':G.objects.get(user=request.user)} def ideal_processor(request): #check context vars to see if G is already in there #if it is, return {}, else, return {'G':G.objects.get(user=request.user)}
def always_G(request): if not hasattr(request, 'G'): {'G':G.objects.get(user=request.user)}
Acabo de hacer middleware que configura el variabel G para que solicite. Ya que lo necesito en prácticamente todas las solicitudes de todos modos. es decir:
class GuildMiddleware(object): def process_request(self, request): request.G = figure_out_what_G_is() return None
Ahora puede usar request.G en cualquier lugar de sus vistas (y plantillas si está usando direct_to_template, RequestContext, etc.). |
```(aka backtick or grave accent) in a single line before and after the block. See: http://commonmark.org/help/
Backtrader Multi-processing Issue.
Hi BT community,
I am a huge fan of BT and using the platform from last 6 months.
Recently I tried strategy optimisation using Muti-processors and It worked fine for smaller range of backtest.
It works fine for around 100 counts but after it stops working.
Starting Backtest
Starting optimisation
Killed
(base) aadhunik@aadhunik:~/Desktop$ Process ForkPoolWorker-3:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 127, in worker
put((job, i, result))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 397, in _send_bytes
self._send(header)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 132, in worker
put((job, i, (False, wrapped)))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Process ForkPoolWorker-2:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 127, in worker
put((job, i, result))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 397, in _send_bytes
self._send(header)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 132, in worker
put((job, i, (False, wrapped)))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
Process ForkPoolWorker-1:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 127, in worker
put((job, i, result))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 397, in _send_bytes
self._send(header)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/pool.py", line 132, in worker
put((job, i, (False, wrapped)))
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/queues.py", line 364, in put
self._writer.send_bytes(obj)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
I have a 12 core I7 processor and Initially I though it can be resolved by lowering the cpu counts but it don't work even for
cerebro = Cerebro(optreturn=False, maxcpus=2)
cerebro.optstrategy(
testStrategy,
fast=8,
slow=range(9, 21),
dcperiod=range(10, 31),
trperiod=12,
volumep=range(5, 20),
)
cerebro = Cerebro(optreturn=False, maxcpus=2)
As long as you use more than
1CPU it is obvious that the problem is going to show up.
File "/home/aadhunik/anaconda3/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
No need to quote the entire chain of exceptions. All messages in the error trace point at the internals Python, in multiprocessing and specifically to pipe communication.
This isn't even a
pickleproblem in which some of the Python objects cannot be conveyed over the pipes communicating processes. Here the pipe is broken and that usually means (not only in Python, in all multi-process environments)
THE PROCESS HAS DIED
We are not in your code, but you can choose from these reasons (many other do exist)
Run out of memory ... and the process died
A non-recoverable exception happened ... and the process died
A non-handled exception ... and the process died
...
The problem is for sure not related to the number of CPUs.
@backtrader
Thank you for helping me out in this.
Actual problem is memory usage as you said.
The memory usage keep rising while multiprocessing.
Although this is not an issue in Backtrader, I would like to know your suggestion on how to deal with this.
From my point of view there could be 2 solutions for these,
1st ) Increasing Swap
2nd ) Re-run the optimisation program every time after n number of simulations(200 in this case).
You probably want to run less simulations each time.
@backtrader
Thanks for your help but I found out that even for 1 cpu the memory usage keeps increasing.
I think it keeps adding data in each and every step of simulation. |
Estoy haciendo un juego de carreras de coches de arriba abajo y quiero hacer que el coche gire cuando presionas las teclas izquierda y derecha (ya he hecho esa parte), la rotación del sprite se almacena en una variable como grados. Me gustaría poder moverlo de acuerdo con la aceleración en la dirección hacia la que está orientado. Yo mismo puedo entender la parte de la aceleración, es solo averiguar qué píxel está exactamente en esa dirección. ¿Alguien puede darme un código simple para ayudar con esto?
Aquí están los contenidos de la clase que son relevantes:
def __init__(self, groups): super(Car, self).__init__(groups) self.originalImage = pygame.image.load(os.path.join("Data", "Images", "Car.png")) #TODO Make dynamic self.originalImage.set_colorkey((0,255,0)) self.image = self.originalImage.copy() # The variable that is changed whenever the car is rotated. self.originalRect = self.originalImage.get_rect() # This rect is ONLY for width and height, the x and y NEVER change from 0! self.rect = self.originalRect.copy() # This is the rect used to represent the actual rect of the image, it is used for the x and y of the image that is blitted. self.velocity = 0 # Current velocity in pixels per second self.acceleration = 1 # Pixels per second (Also applies as so called deceleration AKA friction) self.topSpeed = 30 # Max speed in pixels per second self.rotation = 0 # In degrees self.turnRate = 5 # In degrees per second self.moving = 0 # If 1: moving forward, if 0: stopping, if -1: moving backward self.centerRect = None def update(self, lastFrame): if self.rotation >= 360: self.rotation = 0 elif self.rotation 0: self.image = pygame.transform.rotate(self.originalImage.copy(), self.rotation) self.rect.size = self.image.get_rect().size self.center() # Attempt to center on the last used rect if self.moving == 1: self.velocity += self.acceleration #TODO make time based if self.velocity > self.topSpeed: self.velocity = self.topSpeed # Cap the velocity
# cos and sin require radians x = cos(radians) * offset y = sin(radians) * offset
Se utiliza la velocidad para el desplazamiento . (Esto significa que una velocidad negativa conducirá hacia atrás).
asi que:
def rad_to_offset(radians, offset): # insert better func name. x = cos(radians) * offset y = sin(radians) * offset return [x, y]
loop_update es algo como:
# vel += accel # pos += rad_to_offset( self.rotation, vel )
Almacenar rotaciones como radianes es más simple. Si desea definir la velocidad / etc como grados, todavía puede.
# store radians, but define as degrees car.rotation_accel = radians(45) car.rotation_max_accel = radians(90)
Realmente no puedo hacerlo mejor que indicarte este tutorial (*). En particular, la primera parte explica cómo hacer la rotación y hacer que los sprites se muevan en ciertas direcciones.
(*) Enchufe descarado 🙂 pero muy relevante a la pregunta. |
bug
in count_adjacent_islands, number_of_islands = 0 should be number_of_countries = 0
mutate original argument
Most of the time, it's a bad idea to change any of the arguments to a function unless explicitly expected. So you better take a copy of the matrix first:
matrix_copy = [row[:] for row in matrix]
tuple unpacking
instead of for shift in ((-1,0), (1,0), (0,-1), (0,1)):, you can do for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):, then row, col = [x+y for x,y in zip((this_row, this_col), shift)] can be expressed a lot clearer: row, col = x + dx, y + dy
continue
instead of keep nesting if conditions, you can break out of that iteration earlier if the conditions are not fulfilled
for row_index, row in enumerate(matrix):
for column_index, _ in enumerate(row):
if matrix[row_index][column_index] != 0:
number_of_islands += 1
clean_neighbours(matrix, row_index, column_index)
can become:
for row_index, row in enumerate(matrix_copy):
for column_index, _ in enumerate(row):
if matrix_copy[row_index][column_index] == 0:
continue
number_of_islands += 1
clean_neighbours2(matrix_copy, row_index, column_index)
saving 1 level of indentation on the code that actually does the lifting. This is not much in this particular case, but with larger nested conditions, this can make things a lot clearer, and save a lot of horizontal screen estate
recursion
If there are some larger islands, you will run into the recursion limit. Better would be to transform this to a queue and a loop
from collections import deque
def clean_neighbours2(matrix, x, y):
cell_value = matrix[x][y]
if cell_value == 0:
return
matrix[x][y] = 0
queue = deque([(x,y)])
while queue:
x, y = queue.pop()
for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
row, col = x + dx, y + dy
if (
0 <= row < len(matrix)
and 0 <= col < len(matrix[0])
and not matrix[row][col] == 0
):
continue
if matrix[row][col] == cell_value:
queue.append((row, col))
matrix[row][col] = 0
def count_adjacent_islands2(matrix):
matrix_copy = [row[:] for row in matrix]
number_of_islands = 0
for row_index, row in enumerate(matrix_copy):
for column_index, _ in enumerate(row):
if matrix_copy[row_index][column_index] == 0:
continue
number_of_islands += 1
clean_neighbours2(matrix_copy, row_index, column_index)
return number_of_islands
For the sample data you provided, this code took 3s compared to 4s for the original on my machine
alternative approach
Using numba and numpy, and a slight rewrite to accomodate for numba compatibilities:
from numba import jit
import numpy as np
@jit()
def clean_neighbours_jit(matrix, x, y):
cell_value = matrix[x, y]
if cell_value == 0:
return
matrix[x, y] = 0
queue = [(x, y)]
row_length, column_length = matrix.shape
while queue:
x, y = queue.pop()
for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
row, col = x + dx, y + dy
if (
not 0 <= row < row_length
or not 0 <= col < column_length
or matrix[row, col] != cell_value
):
continue
queue.append((row, col))
matrix[row, col] = 0
@jit()
def count_adjacent_islands_jit(matrix):
matrix_copy = matrix.copy()
number_of_islands = 0
row_length, column_length = matrix_copy.shape
for row_index in range(row_length):
for column_index in range(column_length):
if matrix_copy[row_index, column_index] == 0:
continue
number_of_islands += 1
clean_neighbours_jit(matrix_copy, row_index, column_index)
return number_of_islands
This expects a numpy array as matrix, (for example: count_adjacent_islands_jit(np.array(A))) but does the job in about 200 to 300ms, (about 80ms spent on converting A to an np.array), so more than 10x speedup. |
Autocomplete Component For Vue.js – vue-simple-suggest
vue-simple-suggest is a simple yet feature-rich autocomplete component for Vue.js app.
vue-simple-suggest
Simple yet feature-rich autocomplete component for Vue.js
Install
npm install --save vue-simple-suggest
See installation guide for more options.
Table of contents
vue-simple-suggest
Install
Table of contents
What is it
Simple example
Installation
Build Setup
Default Controls
Component API
What is it
This is a simple yet feature-rich suggestion/autocomplete component for Vue.js.
Actually, it's so feature rich, that it's possible to do crazy stuff with it, like
Imitating drop-downs and drop-down menus
Turn suggestions list into an actual suggestions table
Work with ANY type of custom input passed (like type=button, radio and etc.)
... And many more things
And, as a bonus, it is very light.
Features
v-modelsupport.
Switching v-modeltype (select/input).
Custom input element through default slot.
Custom list items through named scoped slots.
All HTML5-valid props for default input element are provided (type,tabindexand etc...).
Customizable keyboard controls.
Rich and simple API.
CSS classes for quick and easy restyling.
Many build variants to choose from.
Flexible and customizable component design.
Optional polyfills for IE importable from the lib.
All of the props, events and slots are OPTIONAL for this component, so it can be used without any configuration at all.
New features?
If you feel that something important is missing (or found a bug) - feel free to create an issue. :)
Simple example
To use the component just install via NPM:
npm install --save vue-simple-suggest
Then, in your Vue.js component/page:
<!-- Some component.vue -->
<template>
<vue-simple-suggest
v-model="chosen"
:list="simpleSuggestionList"
:filter-by-query="true">
<!-- Filter by input text to only show the matching results -->
</vue-simple-suggest>
<br>
<p>Chosen element: {{ chosen }}</p>
</template>
<script>
import VueSimpleSuggest from 'vue-simple-suggest'
import 'vue-simple-suggest/dist/styles.css' // Optional CSS
export default {
components: {
VueSimpleSuggest
},
data() {
return {
chosen: ''
}
},
methods: {
simpleSuggestionList() {
return [
'Vue.js',
'React.js',
'Angular.js'
]
}
}
}
</script>
Installation
NPM
npm install --save vue-simple-suggest
# or
yarn add vue-simple-suggest
Unpkg
If including via this method - the component will automatically install itself.
<!-- UMD Component, async/await polyfills through promises -->
<script type="text/javascript" src="https://unpkg.com/vue-simple-suggest"></script>
<script type="text/javascript" src="https://unpkg.com/[email protected]"></script>
<!-- Specific version -->
<!-- CSS -->
<link rel="stylesheet" href="https://unpkg.com/vue-simple-suggest/dist/styles.css">
<!-- If you need polyfills, use IIFE verision below -->
<!-- IIFE build includes ALL polyfills: Object.assign, Promises, Generators, Async/Await! -->
<script type="text/javascript" src="https://unpkg.com/vue-simple-suggest/dist/iife.js"></script>
Importing
/// ESNext (original code, no pollyfills, single-file .vue component, css included)
import VueSimpleSuggest from 'vue-simple-suggest/lib'
///
/// ES6 (async polyfills)
import VueSimpleSuggest from 'vue-simple-suggest'
// or, if you have problems importing:
import VueSimpleSuggest from 'vue-simple-suggest/dist/es6'
///
/// ES7 and above (no polyfills)
import VueSimpleSuggest from 'vue-simple-suggest/dist/es7'
///
/// CommonJS (async, Object.assign and promises are polyfilled)
const VueSimpleSuggest = require('vue-simple-suggest')
// or, if you have problems importing:
const VueSimpleSuggest = require('vue-simple-suggest/dist/cjs')
///
// Optional - import css separately with css loaders:
import 'vue-simple-suggest/dist/styles.css'
Polyfills
New in
v1.8.3
vue-simple-suggest comes with minimal optional polyfills for IE11+ - Object.assign, Element.prototype.closest and Element.prototype.matches.You can import them like this:
import 'vue-simple-suggest/lib/polyfills';
// or
require('vue-simple-suggest/lib/polyfills');
Usage
Globaly:
// You don't need to do it, if including via <script> (umd, iife)
Vue.component('vue-simple-suggest', VueSimpleSuggest)
In single-file .vue components:
<script>
import VueSimpleSuggest from 'vue-simple-suggest'
import 'vue-simple-suggest/dist/styles.css' // Using a css-loader
export default {
components: {
VueSimpleSuggest
}
}
</script>
Build Setup
# clone the repo
git clone https://github.com/KazanExpress/vue-simple-suggest.git
cd ./vue-simple-suggest
# install dependencies
npm install
# serve example with hot reload at localhost
npm run dev
# build example & readme for static serving
npm run docs
Default Controls
New in v1.2.0
These are default keyboard shortcuts.
Can be customized with the controls prop. All fields in this controls object are optional.
Default scheme:
Key (key code) Description
Escape (27) If the suggestions list is shown - hide it. Defined by hideList property.
ArrowDown (40) If the suggestions list is hidden - show it. Defined by selectionDown property.
ArrowUp (38) / ArrowDown (40) Cycle (hover) through suggestions. Defined by selectionUp/selectionDown properties respectfully.
Enter (13) If the list is shown - chooses the highlighted element, if the list is hidden - refills the suggestions based on current input text. Defined by select property.
(Ctrl/Shift) + Space (32) Select the first element in the list. Defined by autocomplete property. Works with Ctrl modifier key or Shift modifier key.
(Ctrl/Shift) + Enter (13) Same as previous, but also hides the suggestions list.
JS object:
{
selectionUp: [38],
selectionDown: [40],
select: [13],
hideList: [27],
autocomplete: [32, 13]
}
Component API
TLDR
Click to expand
<!-- Ref to access the API, v-model for efficient query binding -->
<vue-simple-suggest ref="vueSimpleSuggest" v-model="model"
value-attribute="id"
display-attribute="title"
mode="input"
:placeholder="placeholder!!!"
:list="getListFunction"
:max-suggestions="10"
:min-length="3"
:debounce="100"
:destyled="false"
:remove-list="false"
:filter-by-query="false"
:prevent-submit="true"
:filter="customFilterFunction"
:value="defaultValue"
:controls="{
selectionUp: [38, 33],
selectionDown: [40, 34],
select: [13, 36],
hideList: [27, 35],
autocomplete: [32, 13],
}"
@input="onInputEvent"
@select="onSuggestSelect"
@hover="onSuggestHover"
@focus="onFocus"
@blur="onBlur"
@request-start="onRequestStart"
@request-done="onRequestDone"
@request-failed="onRequestFailed"
@show-list="onShowList"
@hide-list="onHideList"
>
<!-- v-model on input itself is useless -->
<input class="optional-custom-input">
<!-- Appears o top of the list -->
<template slot="misc-item-above" slot-scope="{ suggestions, query }">
<div class="misc-item">
<span>You're searching for {{ query }}.</span>
</div>
<div class="misc-item">
<span>{{ suggestions.length }} suggestions are shown...</span>
</div>
<hr>
</template>
<div slot="suggestion-item" slot-scope="{ suggestion }" class="custom">{{ suggestion.title }}</div>
<!-- Appears below the list -->
<div class="misc-item" slot="misc-item-below" slot-scope="{ suggestions }" v-if="loading">
<span>Loading...</span>
</div>
</vue-simple-suggest>
CSS class structure
If there's a need to customize the appearance of the component, here's the internal class-structure:
// .designed is applied only if `destyled` prop is false.
.vue-simple-suggest.designed.focus // .focus is applied whenever the component is focused.
.input-wrapper
.default-input // Replaced with your custom input if default slot is provided.
.suggestions // Also has transition classes, see below.
.suggest-item
If you wish to use your existing classes, or frameworks like Bootstrap you can inject your own classes using the 'styles' prop:
<!-- Some component.vue -->
<template>
<vue-simple-suggest
v-model="chosen"
:list="simpleSuggestionList"
:styles="autoCompleteStyle"
:destyled=true
:filter-by-query="true">
</vue-simple-suggest>
</template>
<script>
...
export default {
...
data() {
return {
autoCompleteStyle : {
vueSimpleSuggest: "position-relative",
inputWrapper: "",
defaultInput : "form-control",
suggestions: "position-absolute list-group z-1000",
suggestItem: "list-group-item"
}
}
},
...
}
</script>`
<style lang="scss">
.z-1000 {
z-index: 1000;
}
.hover {
background-color: #007bff;
color: #fff;
}
</style>
Transitions
New in v1.8.0
You can also define custom list transitions by defining css rules for the transition named vue-simple-suggest on the .suggestions div:
.suggestions {
/* It's improtant to have a cpecific pivot point for transition:*/
opacity: 1;
}
.vue-simple-suggest-enter-active.suggestions,
.vue-simple-suggest-leave-active.suggestions {
/* Transition length here:*/
transition: opacity .2s;
}
.vue-simple-suggest-enter.suggestions,
.vue-simple-suggest-leave-to.suggestions {
/* Transition rule for "disengaged" element:*/
opacity: 0;
}
API definitions
Props
Name Type Default Description
controls v1.2.0 Object See default controls Determines the keyboard shortcuts in key-codes (for browser-compatibility purposes). Arrays provide the ability to assign multiple keys to one action. Consists of 5 array fields: selectionUp, selectionDown, select, hideList and autocomplete, all of which are optional.
max-suggestions Number 10 The maximum amount of suggestions to display. Set to 0 for infinite suggestions.
min-length Number 3 The minimum amount of symbols in input to trigger suggestion list. vue-simple-suggest starts behaving as a dropdown menu, if the value is 0.
display-attribute String 'title' The property in a suggestion object to display in a list. Supports dotted paths.
value-attribute String 'id' The property in a suggestion object to use as a unique key. Supports dotted paths.
list Funciton or Array () => [] The array provider function, must accept a query as its only argument. Can return an array or a promise. Can be async. The component behaves as a simple input without this function.
debounce Number 0 Determines the list debounce (a time between the input event and a function execution).
destyled Boolean false Whether to cancel the default styling of input and suggestions list.
styles v1.8.0 Object {} CSS classes to attach with current component style.
remove-list Boolean false If true - the suggestion list will be always hidden.
filter-by-query Boolean false Whether to filter the resulting suggestions by input's text query (make it a search component).
filter Function - A custom function for filtering the suggestion results that accepts a single item and a query to filter by as its 2 arguments. Used only if filter-by-query is set to true.
mode v1.4.0 String 'input' The v-model event. Determines the event, that triggers v-model. Can be one of 'input' (v-model binds a displayed property) or 'select' (v-model binds a selected item).
type, value, pattern, etc... All of the HTML5 input attributes with their respected default values.
prevent-submit v1.8.1 Boolean true Whether to prevent form submitting when Enter key is pressed.
mode
New in v1.4.0
Determines the event, that triggers v-model. Can be one of 'input' (default) or 'select'.
For example, if 'input' is chosen - then v-model will update the value each time an input event is fired, setting the input's string.
Same is for 'select' - v-model changes only when something is selected from the list, setting the selected value (string, object or whatever) to its argument.
A proper use-case for it being when one wants to use the component only for selection binding and custom input for text binding:
<vue-simple-suggest v-model="selected" mode="select">
<input v-model="text">
</vue-simple-suggest>
Emitted Events
Name Arguments Description
input HTML input event An outward projection of the current input's event.
focus HTML focus event An outward projection of the current input's event.
blur HTML focus event An outward projection of the current input's event.
select Selected suggestion Fires on suggestion selection (via a mouse click or enter keypress).
hover Hovered suggestion Fires each time a new suggestion is highlighted (via a cursor movement or keyboard arrows).
suggestion-click Selected suggestion, HTML click event Fires on suggestion element click.
show-list - Fires each time the suggestion list is toggled to be shown.
hide-list - Fires each time the suggestion list is being hidden.
request-start Current input value (query) Fires each time a list function starts executing.
request-done Resulting suggestions list Fires when a list function successfully returns a result and forwards that result as an argument.
request-failed The interrrupting exception Fires if an exception occurs during the execution of a list funciton.
Ref Methods
accessed via
$refs.*your ref name here*
Name Arguments Description
showList - Shows the suggestion list. Emits the respected event.
hideList - Hides the suggestion list. Emits the respected event.
getSuggestions query: string Gets and processes suggestions from the list prop. Returns a promise. Emits the requestStart, requestDone and requestFailed events.
research - Debounced getSuggestions on the current input value.
clearSuggestions - Clears the suggestions array.
select suggestion Selects the passed suggestion. Emits the respected event.
hover suggestion Hovers over the passed suggestion. Emits the respected event.
displayProperty suggestion Returns the displayed property of a suggestion.
valueProperty suggestion Returns the value property of a suggestion.
Ref Event Handlers
accessed via
$refs.*your ref name here*
You can use these to imitate some of the component's behaviours.
Name Arguments Description
showSuggestions Alias for onInputClick. Will replace it in the future releases
onInput HTML input event Fires whenever the input text is changed. Emits the input event.
onFocus HTML focus event Fires whenever the input comes into focus, emits the focus event.
onBlur HTML focus event Antonym to onFocus.
onAutocomplete - Fires when the autocomplete keyboard shortcut is pressed. Selects the first suggestion.
onListKeyUp HTML keyup event Fires on component keyup. Internally used for hiding the list.
moveSelection Alias for onArrowKeyDown. Will replace it in the future releases.
Ref Data
accessed via
$refs.*your ref name here*
Name Default Description
selected null Currently selected element.
hovered null Currently hovered element.
suggestions [] Current suggestions list.
listShown false Is suggestion list shown.
inputElement null Currently used HTMLInputElement.
canSend true Whether the assigned getListFuncion can be executed.
timeoutInstance null The timeout until next getListFunction execution.
text $props.value Current input text.
slotIsComponent false Whether this current custom input is a vue-component.
listIsRequest - Whether the list prop is a function.
input - A ref to the current input (component or vanilla).
hoveredIndex - The current hovered element index.
controlScheme Default Controls The current controls scheme.
isPlainSuggestion false Whether the current suggestions list consists of plain strings (not objects).
isClicking false true if the user currently clicks.
isOverList false true if the user currently hovers over suggestions list.
isInFocus false true if the component is currently in focus.
isTabbed false true if the user pressed tab, while the component is in focus.
Slots
Custom input
default slot (optional)
Supports nesting. Input props can be passed to a custom input to avoid their processing by vue-simple-suggest. Defaults to a simple input with props passed to vue-simple-suggest.
Warning: v-model on a custom input IS NOT the same as v-model on vue-simple-suggest!
<!-- Default HTMLInputElement example: -->
<vue-simple-suggest v-model="model" placeholder="Text here" type="search" pattern="[a-z]+"/>
<!-- Vanilla HTMLInputElement example 1: -->
<vue-simple-suggest>
<input pattern="[a-z]+">
</vue-simple-suggest>
<!-- Vanilla HTMLInputElement example 2: -->
<vue-simple-suggest v-model="model" placeholder="Text here" type="search">
</vue-simple-suggest>
<!-- Vanilla HTMLInputElement example 3 (fully equivalent to the second example): -->
<vue-simple-suggest v-model="model">
<input placeholder="Text here" type="search">
</vue-simple-suggest>
<!-- Vanilla HTMLInputElement example 4 (nested): -->
<vue-simple-suggest v-model="model">
<div>
<section>
<input type="email">
</section>
</div>
</vue-simple-suggest>
<!-- Vue component example (also supports nesting): -->
<vue-simple-suggest v-model="vModelGoesHere">
<my-custom-input-somponent :value="initialInputValueGoesHere"></my-custom-input-somponent>
</vue-simple-suggest>
Custom input component caveats:
To work with the vue-simple-suggest your custom input component has to follow certain standard behaviours.
Custom input component must (in order to work properly):
Emit an inputevent.
Emit focusandblurevents.
Have a valueprop.
Custom input component should (in order to avoid usage limitations):
Not stop any event propagations from internal input HTML element.
Forward an original event argument with focusandblurevents.
If vue-simple-suggest with your custom component doesn't seem to react to outside variable changes - bind both components' v-model to the same variable, like so:
<vue-simple-suggest v-model="model">
<my-custom-input-somponent v-model="model"></my-custom-input-somponent>
</vue-simple-suggest>
Custom suggestion item
suggestion-itemslot (optional)
Description
Allows custom html-definitons of the suggestion items in a list.Defaults to <span>{{ displayAttribute(suggestion) }}</span>
Accepts the suggestion object and a query text as a slot-scope attribute values.
<!-- Example: -->
<vue-simple-suggest>
<div slot="suggestion-item" slot-scope="{ suggestion, query }">
<div>{{ suggestion.title }} by {{ suggestion.author }}</div>
</div>
</vue-simple-suggest>
Custom buttons inside of suggestion items
If you want to add some action buttons to the suggetion items, just use the .stop directive modifier to prevent the default suggestion-click:
<!-- Example: -->
<vue-simple-suggest>
<div slot="suggestion-item" slot-scope="{ suggestion, query }">
<span>{{ suggestion.title }} by {{ suggestion.author }}</span>
<button @click.stop="remove(suggestion)">remove from list</button>
<button @click.stop="like(suggestion)">add to favorites</button>
</div>
</vue-simple-suggest>
In this case, the buttons will ONLY execute the bound method and will not select the suggested item.
Manual autocomplete
If there's a need to autocomplete the suggestion in the input instead of selecting it, you can use the autocomplete() function in the slot's scope:
<!-- Example: -->
<vue-simple-suggest>
<div slot="suggestion-item" slot-scope="{ suggestion, autocomplete }">
<span>{{ suggestion.title }} by {{ suggestion.author }}</span>
<button @click.stop="autocomplete()">Complete input</button>
</div>
</vue-simple-suggest>
Ref Data
In cooperation with ref fields this slot can be used to drastically transform the suggestion list behaviour.
One of the simplest examples - highlighting the query text in the results:
<div slot="suggestion-item" slot-scope="scope">
<span v-html="boldenSuggestion(scope)"></span>
</div>
boldenSuggestion(scope) {
if (!scope) return scope;
const { suggestion, query } = scope;
let result = this.$refs.suggestComponent.displayProperty(suggestion);
if (!query) return result;
const texts = query.split(/[\s-_/\\|\.]/gm).filter(t => !!t) || [''];
return result.replace(new RegExp('(.*?)(' + texts.join('|') + ')(.*?)','gi'), '$1<b>$2</b>$3');
}
Result via Google Books search API:
Custom miscellanious item slots
misc-item-aboveandmisc-item-belowslots (optional)
Allow custom elements to be shown in suggestion list. These elements never dissapear from the list, niether can they be selected nor hovered on.
These can be used for decoration, loaders, error messages and etc.
Do not have defaults, so are not shown until defined.
Accept the suggestions array and a query text as a slot-scope attribute values.
<!-- Examples: -->
<vue-simple-suggest>
<template slot="misc-item-above" slot-scope="{ suggestions, query }">
<div class="misc-item">
<span>You're searching for {{ query }}.</span>
</div>
<div class="misc-item">
<span>{{ suggestions.length }} suggestions are shown...</span>
</div>
</template>
<div slot="misc-item-below" slot-scope="{ suggestions }" v-if="isLoading" class="misc-item">
<span>Loading...</span>
</div>
</vue-simple-suggest>
These slots can also be used to handle empty results, like this:
<!-- Main slot template -->
<template slot="misc-item-above" slot-scope="{ suggestions, query }">
<div class="misc-item">
<span>You're searching for '{{ query }}'.</span>
</div>
<!-- Sub-template if have any suggestions -->
<template v-if="suggestions.length > 0">
<div class="misc-item">
<span>{{ suggestions.length }} suggestions are shown...</span>
</div>
<hr>
</template>
<!-- Show "No result" otherwise, if not loading new ones -->
<div class="misc-item" v-else-if="!loading">
<span>No results</span>
</div>
</template>
Github Repository
Tags: #VueJs |
agronholm on v3.7.0
agronholm on 3.x
Added the release version (compare)
agronholm on 3.x
Added mention of flask-apschedu… Made the schedulers explicitly … (compare)
agronholm on 3.x
Conditionally import BrokenProc… (compare)
agronholm on 3.x
Skip test_broken_pool on py2.7 Documented the None value for m… (compare)
@app.get("/schedule/show_schedules/",response_model=CurrentScheduledJobsResponse,tags=["schedule"])
async def get_scheduled_syncs():
"""
Will provide a list of currently Scheduled Tasks
"""
schedules = []
for job in Schedule.get_jobs():
schedules.append({"job_id": str(job.id), "run_frequency": str(job.trigger), "next_run": str(job.next_run_time)})
return {"jobs":schedules}
add_job()?
so İ have a Bot class which is from twitchio package, and that's running asynchronously.
what would I have to do if I wanted to make apshceduler and twitchio bot run together?
aim: I will update the data periodically so my bot will respond faster
from twitchio.ext import commands
class Bot(commands.Bot):
def __init__(self):
...
bot = Bot()
bot.run() # there is run_until_complete function in run()
maxiumum number of running instances reached(1) but I set max_instances to 5
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 47200))
log.info("socket bind")
except socket.error:
log.info("socket error")
pass
else:
log.info("socket - register extensions")
scheduler = AsyncIOScheduler()
log.basicConfig()
log.getLogger("apscheduler").setLevel(log.DEBUG)
scheduler.start()
scheduler.add_job(
test_function,
trigger="interval",
id="test_job",
replace_existing=True,
seconds=30,
)
bot = Bot()
bot.run()
|
I am writing my first flask application. I am dealing with file uploads, and basically what I want is to read the data/content of the uploaded file without saving it and then print it on the resulting page. Yes, I am assuming that the user uploads a text file always.
Here is the simple upload function i am using:
@app.route('/upload/', methods=['GET', 'POST'])
def upload():
if request.method == 'POST':
file = request.files['file']
if file:
filename = secure_filename(file.filename)
file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
a = 'file uploaded'
return render_template('upload.html', data = a)
Right now, I am saving the file, but what I need is that 'a' variable to contain the content/data of the file .. any ideas? |
In my Home Assistant setup, I created an SMTP notification that would email my phone’s SMS gateway when certain conditions were met. Except it didn’t work.
When monitoring the log file, I found the following when the condition was supposed to trigger:
WARNING (MainThread) [homeassistant.core] Unable to find service notify/txt_smeg
When I did additional digging in the log file, I found this:
ERROR (Thread-6) [homeassistant.components.notify.smtp] Login not possible. Please check your setting and/or your credentials
Traceback (most recent call last):
File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/notify/smtp.py", line 120, in connection_is_valid
server = self.connect()
File "/srv/homeassistant/lib/python3.5/site-packages/homeassistant/components/notify/smtp.py", line 113, in connect
mail.login(self.username, self.password)
File "/usr/lib/python3.5/smtplib.py", line 729, in login
raise last_exception
File "/usr/lib/python3.5/smtplib.py", line 720, in login
initial_response_ok=initial_response_ok)
File "/usr/lib/python3.5/smtplib.py", line 641, in auth
raise SMTPAuthenticationError(code, resp)
smtplib.SMTPAuthenticationError: (535, b'5.7.8 Error: authentication failed:')
ERROR (MainThread) [homeassistant.components.notify] Failed to initialize notification service smtp
But everything in my configuration looked OK. I checked and the password works. Restarting the Home Assistant service resulted in the same error.
After seeing this discussion where a machine restart magically fixes this problem, I restarted my entire device. After it booted back up, the notification worked correctly. My only guess is that smtplib must keep a cache somewhere, or holds something in memory across application re-loads. |
0x00 关于cURL
cURL可以使用URL的语法模拟浏览器来传输数据,它支持FTP、FTPS、HTTP、HTTPS、GOPHER、TELNET、DICT、FILE以及LDAP等多种协议。
利用cURL可以实现:HTTPS认证、HTTP POST方法、HTTP PUT方法、FTP上传、keyberos认证、代理服务器、cookies、用户名/密码认证、下载文件断点续传、上传文件断点续传、http代理服务器管道等等。
0x01 pycurl常见方法
创建curl对象
c = pycurl.Curl() #创建一个curl对象
设置请求
c.setopt(pycurl.URL,"http://www.baidu.com") #指定请求的URL
c.setopt(pycurl.CONNECTTIMEOUT, 5) #连接的等待时间,设置为0则不等待
c.setopt(pycurl.TIMEOUT, 5) #请求超时时间
c.setopt(pycurl.USERAGENT,"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:46.0) Gecko/20100101 Firefox/46.0") #配置User-Agent
c.setopt(pycurl.NOPROGRESS, 0) #是否屏蔽下载进度条,非0则屏蔽
c.setopt(pycurl.MAXREDIRS, 5) #指定HTTP重定向的最大数
c.setopt(pycurl.FORBID_REUSE, 1) #完成交互后强制断开连接,不重用
c.setopt(pycurl.FRESH_CONNECT,1) #强制获取新的连接,即替代缓存中的连接
c.setopt(pycurl.DNS_CACHE_TIMEOUT,60) #设置保存DNS信息的时间,默认为120秒
c.setopt(pycurl.HEADERFUNCTION, getheader) #将返回的HTTP HEADER定向到回调函数getheader
c.setopt(pycurl.WRITEFUNCTION, getbody) #将返回的内容定向到回调函数getbody
c.setopt(pycurl.WRITEHEADER, fileobj) #将返回的HTTP HEADER定向到fileobj文件对象
c.setopt(pycurl.WRITEDATA, fileobj) #将返回的HTML内容定向到fileobj文件对象
部分返回信息
c.getinfo(pycurl.HTTP_CODE) #返回的HTTP状态码
c.getinfo(pycurl.HEADER_SIZE) #HTTP头部大小
c.getinfo(pycurl.TOTAL_TIME) #传输结束所消耗的总时间
c.getinfo(pycurl.NAMELOOKUP_TIME) #DNS解析所消耗的时间
c.getinfo(pycurl.CONNECT_TIME) #建立连接所消耗的时间
c.getinfo(pycurl.PRETRANSFER_TIME) #从建立连接到准备传输所消耗的时间
c.getinfo(pycurl.STARTTRANSFER_TIME)#从建立连接到传输开始消耗的时间
c.getinfo(pycurl.REDIRECT_TIME) #重定向所消耗的时间
c.getinfo(pycurl.SIZE_UPLOAD) #上传数据包大小
c.getinfo(pycurl.SIZE_DOWNLOAD) #下载数据包大小
c.getinfo(pycurl.SPEED_DOWNLOAD) #平均下载速度
c.getinfo(pycurl.SPEED_UPLOAD) #平均上传速度
0x02 简单使用
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pycurl
import StringIO
buf = StringIO.StringIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "http://127.0.0.1/site/range/sqli/sqli1.php")
c.setopt(pycurl.TIMEOUT, 15)
c.setopt(pycurl.FOLLOWLOCATION, 1) #允许跟踪来源
c.setopt(pycurl.MAXREDIRS, 5)
c.setopt(pycurl.SSL_VERIFYPEER, 0)
c.setopt(pycurl.SSL_VERIFYHOST, 0)
c.setopt(pycurl.USERAGENT,"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:46.0) Gecko/20100101 Firefox/46.0")
c.setopt(pycurl.WRITEFUNCTION, buf.write) #将返回的内容定向到回调函数write
c.perform()
status_code = c.getinfo(pycurl.HTTP_CODE) #返回的HTTP状态码
content_size = c.getinfo(pycurl.SIZE_DOWNLOAD) #返回的数据的大小
content =buf.getvalue()
print status_code
print content_size
print content
|
blob: c24aa879920bec20de0fde9a919bb6b4ae3beb25 (
plain
)
# Copyright (C) 2006, Red Hat, Inc.
# Copyright (C) 2007, One Laptop Per Child
# Copyright (C) 2009, Tomeu Vizoso
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
from gettext import gettext as _
from gi.repository import GObject
from gi.repository import Gtk
from gi.repository import Gdk
from gi.repository import Pango
from gi.repository import WebKit
from sugar3.graphics.toolbutton import ToolButton
from sugar3.graphics import iconentry
from sugar3.graphics.toolbarbox import ToolbarBox as ToolbarBase
from sugar3.activity.widgets import ActivityToolbarButton
from sugar3.activity.widgets import StopButton
import filepicker
import places
from sugarmenuitem import SugarMenuItem
from browser import Browser
_MAX_HISTORY_ENTRIES = 15
class WebEntry(iconentry.IconEntry):
_COL_ADDRESS = 0
_COL_TITLE = 1
def __init__(self):
GObject.GObject.__init__(self)
self._address = None
self._title = None
self._search_view = self._search_create_view()
self._search_window = Gtk.Window(type=Gtk.WindowType.POPUP)
self._search_window.add(self._search_view)
self._search_view.show()
self.connect('focus-in-event', self.__focus_in_event_cb)
self.connect('populate-popup', self.__populate_popup_cb)
self.connect('key-press-event', self.__key_press_event_cb)
self.connect('enter-notify-event', self.__enter_notify_event_cb)
self.connect('leave-notify-event', self.__leave_notify_event_cb)
self._focus_out_hid = self.connect(
'focus-out-event', self.__focus_out_event_cb)
self._change_hid = self.connect('changed', self.__changed_cb)
def _set_text(self, text):
"""Set the text but block changes notification, so that we can
recognize changes caused directly by user actions"""
self.handler_block(self._change_hid)
try:
self.props.text = text
finally:
self.handler_unblock(self._change_hid)
def activate(self, uri):
self._set_text(uri)
self._search_popdown()
self.emit('activate')
def _set_address(self, address):
self._address = address
if address is not None and self.props.has_focus:
self._set_text(address)
address = GObject.property(type=str, setter=_set_address)
def _set_title(self, title):
self._title = title
if title is not None and not self.props.has_focus:
self._set_text(title)
title = GObject.property(type=str, setter=_set_title)
def _search_create_view(self):
view = Gtk.TreeView()
view.props.headers_visible = False
view.connect('button-press-event', self.__view_button_press_event_cb)
column = Gtk.TreeViewColumn()
view.append_column(column)
cell = Gtk.CellRendererText()
cell.props.ellipsize = Pango.EllipsizeMode.END
cell.props.ellipsize_set = True
cell.props.font = 'Bold'
column.pack_start(cell, True)
column.add_attribute(cell, 'text', self._COL_TITLE)
cell = Gtk.CellRendererText()
cell.props.ellipsize = Pango.EllipsizeMode.END
cell.props.ellipsize_set = True
cell.props.alignment = Pango.Alignment.LEFT
column.pack_start(cell, True)
column.add_attribute(cell, 'text', self._COL_ADDRESS)
return view
def _search_update(self):
list_store = Gtk.ListStore(str, str)
for place in places.get_store().search(self.props.text):
list_store.append([place.uri, place.title])
self._search_view.set_model(list_store)
return len(list_store) > 0
def _search_popup(self):
miss, window_x, window_y = self.props.window.get_origin()
entry_allocation = self.get_allocation()
search_x = window_x + entry_allocation.x
search_y = window_y + entry_allocation.y + entry_allocation.height
search_width = entry_allocation.width
search_height = Gdk.Screen.height() / 3
self._search_window.move(search_x, search_y)
self._search_window.resize(search_width, search_height)
self._search_window.show()
def _search_popdown(self):
self._search_window.hide()
def __focus_in_event_cb(self, entry, event):
self._set_text(self._address)
self._search_popdown()
def __focus_out_event_cb(self, entry, event):
self._set_text(self._title)
self._search_popdown()
def __enter_notify_event_cb(self, entry, event):
if not entry.props.has_focus:
self._set_text(self._address)
def __leave_notify_event_cb(self, entry, event):
if not entry.props.has_focus:
self._set_text(self._title)
def __view_button_press_event_cb(self, view, event):
model = view.get_model()
path, col_, x_, y_ = view.get_path_at_pos(int(event.x), int(event.y))
if path:
uri = model[path][self._COL_ADDRESS]
self.activate(uri)
def __key_press_event_cb(self, entry, event):
keyname = Gdk.keyval_name(event.keyval)
selection = self._search_view.get_selection()
model, selected = selection.get_selected()
if keyname == 'Up':
if selected is None:
selection.select_iter(model[-1].iter)
self._set_text(model[-1][0])
else:
up_iter = model.iter_previous(selected)
if up_iter:
selection.select_iter(up_iter)
self._set_text(model.get(up_iter, 0)[0])
return True
elif keyname == 'Down':
if selected is None:
down_iter = model.get_iter_first()
else:
down_iter = model.iter_next(selected)
if down_iter:
selection.select_iter(down_iter)
self._set_text(model.get(down_iter, 0)[0])
return True
elif keyname == 'Return':
if selected is None:
return False
uri = model[model.get_path(selected)][self._COL_ADDRESS]
self.activate(uri)
return True
elif keyname == 'Escape':
self._search_window.hide()
return True
return False
def __popup_unmap_cb(self, entry):
self.handler_unblock(self._focus_out_hid)
def __populate_popup_cb(self, entry, menu):
self.handler_block(self._focus_out_hid)
menu.connect('unmap', self.__popup_unmap_cb)
def __changed_cb(self, entry):
self._address = self.props.text
if not self.props.text or not self._search_update():
self._search_popdown()
else:
self._search_popup()
class PrimaryToolbar(ToolbarBase):
__gtype_name__ = 'PrimaryToolbar'
__gsignals__ = {
'add-link': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
'go-home': (GObject.SignalFlags.RUN_FIRST,
None,
([])),
}
def __init__(self, tabbed_view, act):
ToolbarBase.__init__(self)
self._activity = act
self._tabbed_view = tabbed_view
self._loading = False
self._title = _('Untitled')
toolbar = self.toolbar
activity_button = ActivityToolbarButton(self._activity)
toolbar.insert(activity_button, 0)
self._go_home = ToolButton('go-home')
self._go_home.set_tooltip(_('Home page'))
self._go_home.connect('clicked', self._go_home_cb)
toolbar.insert(self._go_home, -1)
self._go_home.show()
self.entry = WebEntry()
self.entry.set_icon_from_name(iconentry.ICON_ENTRY_SECONDARY,
'browse-dialog-cancel')
self.entry.connect('icon-press', self._stop_and_reload_cb)
self.entry.connect('activate', self._entry_activate_cb)
entry_item = Gtk.ToolItem()
entry_item.set_expand(True)
entry_item.add(self.entry)
self.entry.show()
toolbar.insert(entry_item, -1)
entry_item.show()
self._back = ToolButton('go-previous-paired')
self._back.set_tooltip(_('Back'))
self._back.props.sensitive = False
self._back.connect('clicked', self._go_back_cb)
toolbar.insert(self._back, -1)
self._back.show()
palette = self._back.get_palette()
self._back_box_menu = Gtk.VBox()
self._back_box_menu.show()
palette.set_content(self._back_box_menu)
# FIXME, this is a hack, should be done in the theme:
palette._content.set_border_width(1)
self._forward = ToolButton('go-next-paired')
self._forward.set_tooltip(_('Forward'))
self._forward.props.sensitive = False
self._forward.connect('clicked', self._go_forward_cb)
toolbar.insert(self._forward, -1)
self._forward.show()
palette = self._forward.get_palette()
self._forward_box_menu = Gtk.VBox()
self._forward_box_menu.show()
palette.set_content(self._forward_box_menu)
# FIXME, this is a hack, should be done in the theme:
palette._content.set_border_width(1)
self._link_add = ToolButton('emblem-favorite')
self._link_add.set_tooltip(_('Bookmark'))
self._link_add.connect('clicked', self._link_add_clicked_cb)
toolbar.insert(self._link_add, -1)
self._link_add.show()
stop_button = StopButton(self._activity)
toolbar.insert(stop_button, -1)
self._progress_listener = None
self._browser = None
self._loading_changed_hid = None
self._progress_changed_hid = None
self._session_history_changed_hid = None
self._title_changed_hid = None
self._uri_changed_hid = None
if tabbed_view.get_n_pages():
self._connect_to_browser(tabbed_view.props.current_browser)
tabbed_view.connect_after('switch-page', self.__switch_page_cb)
def __switch_page_cb(self, tabbed_view, page, page_num):
if tabbed_view.get_n_pages():
self._connect_to_browser(tabbed_view.props.current_browser)
def _connect_to_browser(self, browser):
if self._browser is not None:
self._browser.disconnect(self._title_changed_hid)
self._browser.disconnect(self._uri_changed_hid)
self._browser.disconnect(self._progress_changed_hid)
self._browser.disconnect(self._loading_changed_hid)
self._browser = browser
if self._browser.props.title:
self._set_title(self._browser.props.title)
else:
self._set_title(_('Untitled'))
self._set_address(self._browser.props.uri)
self._set_progress(self._browser.props.progress)
self._set_status(self._browser.props.load_status)
is_webkit_browser = isinstance(self._browser, Browser)
self.entry.props.editable = is_webkit_browser
self._title_changed_hid = self._browser.connect(
'notify::title', self._title_changed_cb)
self._uri_changed_hid = self._browser.connect(
'notify::uri', self.__uri_changed_cb)
self._progress_changed_hid = self._browser.connect(
'notify::progress', self.__progress_changed_cb)
self._loading_changed_hid = self._browser.connect(
'notify::load-status', self.__loading_changed_cb)
self._update_navigation_buttons()
def __loading_changed_cb(self, widget, param):
status = widget.get_load_status()
if status == WebKit.LoadStatus.FAILED:
self.entry._set_title(self._title)
elif WebKit.LoadStatus.PROVISIONAL <= status \
< WebKit.LoadStatus.FINISHED:
self.entry._set_title(_('Loading...'))
elif status == WebKit.LoadStatus.FINISHED:
if widget.props.title == None:
self.entry._set_title(_('Untitled'))
self._title = _('Untitled')
self._set_status(widget.get_load_status())
def __progress_changed_cb(self, widget, param):
self._set_progress(widget.get_progress())
def _set_status(self, status):
self._set_loading(status < WebKit.LoadStatus.FINISHED)
def _set_progress(self, progress):
if progress == 1.0:
self.entry.set_progress_fraction(0.0)
else:
self.entry.set_progress_fraction(progress)
def _set_address(self, uri):
if uri is None:
self.entry.props.address = ''
else:
self.entry.props.address = uri
def _set_title(self, title):
self.entry.props.title = title
self._title = title
def _show_stop_icon(self):
self.entry.set_icon_from_name(iconentry.ICON_ENTRY_SECONDARY,
'browse-dialog-cancel')
def _show_reload_icon(self):
self.entry.set_icon_from_name(iconentry.ICON_ENTRY_SECONDARY,
'browse-view-refresh')
def _update_navigation_buttons(self):
can_go_back = self._browser.can_go_back()
self._back.props.sensitive = can_go_back
can_go_forward = self._browser.can_go_forward()
self._forward.props.sensitive = can_go_forward
is_webkit_browser = isinstance(self._browser, Browser)
self._link_add.props.sensitive = is_webkit_browser
self._go_home.props.sensitive = is_webkit_browser
if is_webkit_browser:
self._reload_session_history()
def _entry_activate_cb(self, entry):
url = entry.props.text
effective_url = self._tabbed_view.normalize_or_autosearch_url(url)
self._browser.load_uri(effective_url)
self._browser.grab_focus()
def _go_home_cb(self, button):
self.emit('go-home')
def _go_back_cb(self, button):
self._browser.go_back()
def _go_forward_cb(self, button):
self._browser.go_forward()
def _title_changed_cb(self, widget, param):
self._set_title(widget.get_title())
def __uri_changed_cb(self, widget, param):
self._set_address(widget.get_uri())
self._update_navigation_buttons()
filepicker.cleanup_temp_files()
def _stop_and_reload_cb(self, entry, icon_pos, button):
if self._loading:
self._browser.stop_loading()
else:
self._browser.reload()
def _set_loading(self, loading):
self._loading = loading
if self._loading:
self._show_stop_icon()
else:
self._show_reload_icon()
def _reload_session_history(self):
back_forward_list = self._browser.get_back_forward_list()
item_index = 0 # The index of the history item
# Clear menus in palettes:
for box_menu in (self._back_box_menu, self._forward_box_menu):
for menu_item in box_menu.get_children():
box_menu.remove(menu_item)
def create_menu_item(history_item, item_index):
"""Create a MenuItem for the back or forward palettes."""
title = history_item.get_title()
if not isinstance(title, unicode):
title = unicode(title, 'utf-8')
# This is a fix until the Sugar MenuItem is fixed:
menu_item = SugarMenuItem(text_label=title)
menu_item.connect('clicked', self._history_item_activated_cb,
item_index)
return menu_item
back_list = back_forward_list.get_back_list_with_limit(
_MAX_HISTORY_ENTRIES)
back_list.reverse()
for item in back_list:
menu_item = create_menu_item(item, item_index)
self._back_box_menu.pack_end(menu_item, False, False, 0)
menu_item.show()
item_index += 1
# Increment the item index to count the current page:
item_index += 1
forward_list = back_forward_list.get_forward_list_with_limit(
_MAX_HISTORY_ENTRIES)
forward_list.reverse()
for item in forward_list:
menu_item = create_menu_item(item, item_index)
self._forward_box_menu.pack_start(menu_item, False, False, 0)
menu_item.show()
item_index += 1
def _history_item_activated_cb(self, menu_item, index):
self._browser.set_history_index(index)
def _link_add_clicked_cb(self, button):
self.emit('add-link')
|
const Topo = require('@hapi/topo'); let list = new Topo() let counter= 0 list.add('one', {group:'one'}) //this package requires adding the group name so we make it the same list.add('four', {group: 'four', after: 'one', sort: counter++}) list.add('three', {group:'three', before: 'four', after:'two', sort: counter++}) list.add('two', {group: 'two', after: 'one', sort: counter++}) list.nodes //returns ['one', 'two', 'three', 'four'] //example from Asthmatic's comment list = new Topo() counter = 0 list.add('one', {group:'one', sort: counter++}) //this package requires adding the group name so we make it the same list.add('four', {group: 'four', after: 'one', sort: counter++}) list.add('two', {group: 'two', after: 'one', sort: counter++}) list.nodes // returns ['one', 'four', 'two']
sorted_lists = sorted(izip(a, b, c, d, score), reverse=True, key=lambda x: x[4]) a, b, c, d, score = [[x[i] for x in sorted_lists] for i in range(5)]
a, b, c, d, score = izip(*sorted(izip(a, b, c, d, score), reverse=True, key=lambda x: x[4]))
def sort_lists_by(lists, key_list=0, desc=False): return izip(*sorted(izip(*lists), reverse=desc, key=lambda x: x[key_list]))
// Very simple, rudimentary function to translate a type to number. Improve at will. function typeIndex(x) { if (x.indexOf('translate') > -1) return 0; if (x.indexOf('rotate') > -1) return 1; if (x.indexOf('skew') > -1) return 2; if (x.indexOf('scale') > -1) return 3; return 1000; // Unknown } var p = ['skewX', 'rotateY', 'rotateZ', 'translateY', 'scale', 'rotateX', 'ordered skewing']; // Sort array using callback; p.sort(function(a, b){ // First compare the difference based on type. var result = typeIndex(a) - typeIndex(b); // If the difference is 0, they are of the same type. Compare the whole string. if (result == 0) result = a.localeCompare(b); return result; }); console.log(p);
s = ["hello there you would like to sort me", "sorted i would like to be", "the banana does not taste like the orange", "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum", "orange is my favorite color"] from collections import Counter def create_word_freq_dict(series): return Counter(word for row in series for word in row.lower().split()) word_counts = create_word_freq_dict(s) for row in s: print sorted(row.lower().split(), lambda x, y: word_counts[y] - word_counts[x])
['hello', 'like', 'there', 'would', 'to', 'you', 'sort', 'me']['like', 'would', 'to', 'sorted', 'i', 'be']['like', 'orange', 'the', 'banana', 'the', 'does', 'not', 'taste']['hello', 'my', 'friend', 'said']['hello', 'there', 'amigo']['orange', 'apple', 'apple', 'banana', 'peach', 'pear', 'plum']['orange', 'my', 'is', 'favorite', 'color']
for row in s: sorted_row = sorted(row.split(), lambda x, y: word_counts[y] - word_counts[x]) print zip(sorted_row, map(lambda x: word_counts[x], sorted_row))
[('hello', 3), ('like', 3), ('there', 2), ('would', 2), ('to', 2), ('you', 1), ('sort', 1), ('me', 1)][('like', 3), ('would', 2), ('to', 2), ('sorted', 1), ('i', 1), ('be', 1)][('like', 3), ('orange', 3), ('the', 2), ('banana', 2), ('the', 2), ('does', 1), ('not', 1), ('taste', 1)][('hello', 3), ('my', 2), ('friend', 1), ('said', 1)][('hello', 3), ('there', 2), ('amigo', 1)][('orange', 3), ('apple', 2), ('apple', 2), ('banana', 2), ('peach', 1), ('pear', 1), ('plum', 1)][('orange', 3), ('my', 2), ('is', 1), ('favorite', 1), ('color', 1)]
s = ["hello there you would like to sort me", "sorted i would like to be", "the banana does not taste like the orange", "my friend said hello", "hello there amigo", "apple apple banana orange peach pear plum", "orange is my favorite color"] from functools import cmp_to_key from collections import Counter def create_word_freq_dict(series): return Counter(word for row in series for word in row.lower().split()) word_counts = create_word_freq_dict(s) for row in s: sorted_row = sorted(row.split(), key=cmp_to_key(lambda x, y: word_counts[y] - word_counts[x])) print(sorted_row)
|
blob: adf5a15106164a9e8af78c8e6fafca79c359d538 (
plain
)
# -*- coding: utf-8 -*-
#Copyright (c) 2010-11 Walter Bender
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in
#all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
#THE SOFTWARE.
from gettext import gettext as _
#
# Sprite layers
#
HIDE_LAYER = 100
CANVAS_LAYER = 500
OVERLAY_LAYER = 525
TURTLE_LAYER = 550
BLOCK_LAYER = 600
CATEGORY_LAYER = 700
TAB_LAYER = 710
STATUS_LAYER = 900
TOP_LAYER = 1000
# Special-case some block colors
BOX_COLORS = {'red': ["#FF0000", "#A00000"],
'orange': ["#FFD000", "#AA8000"],
'yellow': ["#FFFF00", "#A0A000"],
'blue': ["#0000FF", "#000080"],
'cyan': ["#00FFFF", "#00A0A0"],
'green': ["#00FF00", "#008000"],
'purple': ["#FF00FF", "#A000A0"],
'white': ["#FFFFFF", "#A0A0A0"],
'black': ["#000000", "#000000"]}
#
# Misc. parameters
#
PALETTE_HEIGHT = 120
PALETTE_WIDTH = 175
SELECTOR_WIDTH = 55
ICON_SIZE = 55
GRADIENT_COLOR = "#FFFFFF"
STANDARD_STROKE_WIDTH = 1.0
BLOCK_SCALE = 2.0
PALETTE_SCALE = 1.5
DEFAULT_TURTLE = 'Yertle'
DEFAULT_TURTLE_COLORS = ['#008000', '#00A000']
HORIZONTAL_PALETTE = 0
VERTICAL_PALETTE = 1
BLACK = -9999
WHITE = -9998
HIT_HIDE = 248
HIT_SHOW = 240
HIT_RED = "#F80000"
HIT_GREEN = "#00F000"
HIDE_WHITE = "#F8F8F8"
SHOW_WHITE = "#F0F0F0"
DEFAULT_SCALE = 33
XO1 = 'xo1'
XO15 = 'xo1.5'
UNKNOWN = 'unknown'
#
# Blocks that are expandable
#
EXPANDABLE_STYLE = ['boolean-style', 'compare-porch-style', 'compare-style',
'number-style-porch', 'number-style', 'basic-style-2arg']
EXPANDABLE = ['vspace', 'hspace', 'identity2']
EXPANDABLE_ARGS = ['list', 'myfunc1arg', 'myfunc2arg',
'myfunc3arg', 'userdefined', 'userdefined2args',
'userdefined3args']
#
# Blocks that are 'collapsible'
#
COLLAPSIBLE = ['sandwichbottom', 'sandwichcollapsed']
#
# Deprecated block styles that need dock adjustments
#
OLD_DOCK = ['and', 'or', 'plus', 'minus', 'division', 'product', 'remainder']
#
# These blocks get a special skin
#
BLOCKS_WITH_SKIN = ['journal', 'audio', 'description', 'nop', 'userdefined',
'video', 'userdefined2args', 'userdefined3args', 'camera']
PYTHON_SKIN = ['nop', 'userdefined', 'userdefined2args', 'userdefined3args']
#
# These blocks hold constants
#
CONSTANTS = {'leftpos': None, 'toppos': None, 'rightpos': None,
'bottompos': None, 'width': None, 'height': None, 'red': 0,
'orange': 10, 'yellow': 20, 'green': 40, 'cyan': 50, 'blue': 70,
'purple': 90, 'titlex': None, 'titley': None, 'leftx': None,
'topy': None, 'rightx': None, 'bottomy': None}
#
# Blocks that can interchange strings and numbers for their arguments
#
STRING_OR_NUMBER_ARGS = ['plus2', 'equal2', 'less2', 'greater2', 'box',
'template1x1', 'template1x2', 'template2x1', 'list',
'template2x2', 'template1x1a', 'templatelist', 'nop',
'print', 'stack', 'hat', 'addturtle', 'myfunc',
'myfunc1arg', 'myfunc2arg', 'myfunc3arg', 'comment',
'sandwichtop', 'sandwichtop_no_arm', 'userdefined',
'userdefined2args', 'userdefined3args', 'storein']
CONTENT_ARGS = ['show', 'showaligned', 'push', 'storein', 'storeinbox1',
'storeinbox2']
PREFIX_DICTIONARY = {'journal': '#smedia_', 'description': '#sdescr_',
'audio': '#saudio_', 'video': '#svideo_'}
#
# Status blocks
#
MEDIA_SHAPES = ['audiooff', 'audioon', 'audiosmall',
'videooff', 'videoon', 'videosmall',
'cameraoff', 'camerasmall',
'journaloff', 'journalon', 'journalsmall',
'descriptionoff', 'descriptionon', 'descriptionsmall',
'pythonoff', 'pythonon', 'pythonsmall',
'list', '1x1', '1x1a', '2x1', '1x2', '2x2']
OVERLAY_SHAPES = ['Cartesian', 'Cartesian_labeled', 'polar']
STATUS_SHAPES = ['status', 'info', 'nostack', 'dupstack', 'noinput',
'emptyheap', 'emptybox', 'nomedia', 'nocode', 'overflowerror',
'negroot', 'syntaxerror', 'nofile', 'nojournal', 'zerodivide',
'notanumber', 'incompatible']
#
# Emulate Sugar toolbar when running from outside of Sugar
#
TOOLBAR_SHAPES = ['hideshowoff', 'eraseron', 'run-fastoff',
'run-slowoff', 'debugoff', 'stopiton']
#
# Legacy names
#
OLD_NAMES = {'product': 'product2', 'storeinbox': 'storein', 'minus': 'minus2',
'division': 'division2', 'plus': 'plus2', 'and': 'and2',
'or': 'or2', 'less': 'less2', 'greater': 'greater2',
'equal': 'equal2', 'remainder': 'remainder2',
'identity': 'identity2', 'division': 'division2',
'audiooff': 'audio', 'endfill': 'stopfill',
'descriptionoff': 'description', 'template3': 'templatelist',
'template1': 'template1x1', 'template2': 'template2x1',
'template6': 'template1x2', 'template7': 'template2x2',
'template4': 'template1x1a', 'hres': 'width', 'vres': 'height',
'sandwichtop2': 'sandwichtop', 'image': 'show',
'container': 'indentity2', 'insertimage': 'show'}
#
# Define the relative size and postion of media objects
# (w, h, x, y, dx, dy)
#
TITLEXY = (0.9375, 0.875)
#
# Relative placement of portfolio objects (used by deprecated blocks)
#
TEMPLATES = {'t1x1': (0.5, 0.5, 0.0625, 0.125, 1.05, 0),
't2z1': (0.5, 0.5, 0.0625, 0.125, 1.05, 1.05),
't1x2': (0.45, 0.45, 0.0625, 0.125, 1.05, 1.05),
't2x2': (0.45, 0.45, 0.0625, 0.125, 1.05, 1.05),
't1x1a': (0.9, 0.9, 0.0625, 0.125, 0, 0),
'bullet': (1, 1, 0.0625, 0.125, 0, 0.1),
'insertimage': (0.333, 0.333)}
#
# 'dead key' Unicode dictionaries
#
DEAD_KEYS = ['grave', 'acute', 'circumflex', 'tilde', 'diaeresis', 'abovering']
DEAD_DICTS = [{'A': 192, 'E': 200, 'I': 204, 'O': 210, 'U': 217, 'a': 224,
'e': 232, 'i': 236, 'o': 242, 'u': 249},
{'A': 193, 'E': 201, 'I': 205, 'O': 211, 'U': 218, 'a': 225,
'e': 233, 'i': 237, 'o': 243, 'u': 250},
{'A': 194, 'E': 202, 'I': 206, 'O': 212, 'U': 219, 'a': 226,
'e': 234, 'i': 238, 'o': 244, 'u': 251},
{'A': 195, 'O': 211, 'N': 209, 'U': 360, 'a': 227, 'o': 245,
'n': 241, 'u': 361},
{'A': 196, 'E': 203, 'I': 207, 'O': 211, 'U': 218, 'a': 228,
'e': 235, 'i': 239, 'o': 245, 'u': 252},
{'A': 197, 'a': 229}]
NOISE_KEYS = ['Shift_L', 'Shift_R', 'Control_L', 'Caps_Lock', 'Pause',
'Alt_L', 'Alt_R', 'KP_Enter', 'ISO_Level3_Shift', 'KP_Divide',
'Escape', 'Return', 'KP_Page_Up', 'Up', 'Down', 'Menu',
'Left', 'Right', 'KP_Home', 'KP_End', 'KP_Up', 'Super_L',
'KP_Down', 'KP_Left', 'KP_Right', 'KP_Page_Down', 'Scroll_Lock',
'Page_Down', 'Page_Up']
WHITE_SPACE = ['space', 'Tab']
CURSOR = 'â–ˆ'
RETURN = 'âŽ'
#
# Macros (groups of blocks)
#
MACROS = {
'until':
[[0, 'forever', 0, 0, [None, 2, 1]],
[1, 'vspace', 0, 0, [0, None]],
[2, 'ifelse', 0, 0, [0, None, 3, None, None]],
[3, 'vspace', 0, 0, [2, 4]],
[4, 'stopstack', 0, 0, [3, None]]],
'while':
[[0, 'forever', 0, 0, [None, 2, 1]],
[1, 'vspace', 0, 0, [0, None]],
[2, 'ifelse', 0, 0, [0, None, 3, 4, None]],
[3, 'vspace', 0, 0, [2, None]],
[4, 'stopstack', 0, 0, [2, None]]],
'kbinput':
[[0, 'forever', 0, 0, [None, 1, None]],
[1, 'kbinput', 0, 0, [0, 2]],
[2, 'vspace', 0, 0, [1, 3]],
[3, 'if', 0, 0, [2, 4, 7, 8]],
[4, 'greater2', 0, 0, [3, 5, 6, None]],
[5, 'keyboard', 0, 0, [4, None]],
[6, ['number', '0'], 0, 0, [4, None]],
[7, 'stopstack', 0, 0, [3, None]],
[8, 'vspace', 0, 0, [3, 9]],
[9, 'wait', 0, 0, [8, 10, None]],
[10, ['number', '1'], 0, 0, [9, None]]],
'picturelist':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'penup', 0, 0, [8, 11]],
[11, 'setxy2', 0, 0, [10, 12, 13, 14]],
[12, 'leftx', 0, 0, [11, None]],
[13, 'topy', 0, 0, [11, None]],
[14, 'pendown', 0, 0, [11, 15]],
[15, 'setscale', 0, 0, [14, 16, 17]],
[16, ['number', '67'], 0, 0, [15, None]],
[17, 'list', 0, 0, [15, 18, 19, 20]],
[18, ['string', '∙ '], 0, 0, [17, None]],
[19, ['string', '∙ '], 0, 0, [17, None]],
[20, 'sandwichbottom', 0, 0, [17, None]]],
'picture1x1a':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'penup', 0, 0, [8, 11]],
[11, 'setxy2', 0, 0, [10, 12, 13, 14]],
[12, 'leftx', 0, 0, [11, None]],
[13, 'topy', 0, 0, [11, None]],
[14, 'pendown', 0, 0, [11, 15]],
[15, 'setscale', 0, 0, [14, 16, 17]],
[16, ['number', '90'], 0, 0, [15, None]],
[17, 'showaligned', 0, 0, [15, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'sandwichbottom', 0, 0, [17, None]]],
'picture2x2':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'journal', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'journal', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'journal', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture1x2':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'description', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'journal', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'description', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture2x1':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'journal', 0, 0, [24, None]],
[26, 'penup', 0, 0, [24, 27]],
[27, 'setxy2', 0, 0, [26, 28, 29, 30]],
[28, 'leftx', 0, 0, [27, None]],
[29, 'bottomy', 0, 0, [27, None]],
[30, 'pendown', 0, 0, [27, 31]],
[31, 'showaligned', 0, 0, [30, 32, 33]],
[32, 'description', 0, 0, [31, None]],
[33, 'penup', 0, 0, [31, 34]],
[34, 'setxy2', 0, 0, [33, 35, 36, 37]],
[35, 'rightx', 0, 0, [34, None]],
[36, 'bottomy', 0, 0, [34, None]],
[37, 'pendown', 0, 0, [34, 38]],
[38, 'showaligned', 0, 0, [37, 39, 40]],
[39, 'description', 0, 0, [38, None]],
[40, 'sandwichbottom', 0, 0, [38, None]]],
'picture1x1':
[[0, 'sandwichtop_no_label', 0, 0, [None, 1]],
[1, 'penup', 0, 0, [0, 2]],
[2, 'setxy2', 0, 0, [1, 3, 4, 5]],
[3, 'titlex', 0, 0, [2, None]],
[4, 'titley', 0, 0, [2, None]],
[5, 'pendown', 0, 0, [2, 6]],
[6, 'setscale', 0, 0, [5, 7, 8]],
[7, ['number', '100'], 0, 0, [6, None]],
[8, 'show', 0, 0, [6, 9, 10]],
[9, ['string', _('Title')], 0, 0, [8, None]],
[10, 'setscale', 0, 0, [8, 11, 12]],
[11, ['number', '35'], 0, 0, [10, None]],
[12, 'penup', 0, 0, [10, 13]],
[13, 'setxy2', 0, 0, [12, 14, 15, 16]],
[14, 'leftx', 0, 0, [13, None]],
[15, 'topy', 0, 0, [13, None]],
[16, 'pendown', 0, 0, [13, 17]],
[17, 'showaligned', 0, 0, [16, 18, 19]],
[18, 'journal', 0, 0, [17, None]],
[19, 'penup', 0, 0, [17, 20]],
[20, 'setxy2', 0, 0, [19, 21, 22, 23]],
[21, 'rightx', 0, 0, [20, None]],
[22, 'topy', 0, 0, [20, None]],
[23, 'pendown', 0, 0, [20, 24]],
[24, 'showaligned', 0, 0, [23, 25, 26]],
[25, 'description', 0, 0, [24, None]],
[26, 'sandwichbottom', 0, 0, [24, None]]],
'reskin':
[[0, 'skin', 0, 0, [None, 1, None]],
[1, 'journal', 0, 0, [0, None]]]}
|
scikit-learnで学習した決定木の学習結果を確認するにはライブラリを使うのが便利ですが、
自分でも直接取得してみたかったので方法を調べてみました。
とりあえず、 iris を学習しておきます。dtreevizの記事とパラメーターを揃えたので、
この後の結果はそちらと見比べていただくとわかりやすいです。
ただし、最初の分岐が2パターンあって乱数でどちらになるか決まるので、運が悪いと結果が変わります。
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
clf = DecisionTreeClassifier(min_samples_split=5)
clf.fit(
iris.data,
iris.target
)
"""
DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini',
max_depth=None, max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=5,
min_weight_fraction_leaf=0.0, presort='deprecated',
random_state=None, splitter='best')
"""
ロジスティック回帰などであれば、係数が coef_に入っているだけなので簡単なのですが、
決定木の場合読み解くのに少し手間がかかります。
その辺りのことは、ドキュメントにも
Understanding the decision tree structureとしてまとめてあるのでこちらも参照しながら読み解いてみました。
必要な情報は clf.tree_の属性としてまとまっているので順番に取り出してみます。
# ノードの数
n_nodes = clf.tree_.node_count
print(n_nodes)
# 13
# 各ノードに振り分けられた学習データの数。
node_values = clf.tree_.value
# 各ノードの左の子ノード。 葉の場合は -1
children_left = clf.tree_.children_left
print(children_left)
# [ 1 -1 3 4 5 -1 -1 8 -1 -1 11 -1 -1]
# 各ノードの右の子ノード。 葉の場合は -1
children_right = clf.tree_.children_right
print(children_right)
# [ 2 -1 10 7 6 -1 -1 9 -1 -1 12 -1 -1]
# 分割に使う特徴量。 葉の場合は-2
feature = clf.tree_.feature
print(feature)
# [ 3 -2 3 2 3 -2 -2 3 -2 -2 2 -2 -2]
# 分割に使う閾値。 葉の場合は-2
threshold = clf.tree_.threshold
print(threshold)
"""
[ 0.80000001 -2. 1.75 4.95000005 1.65000004 -2.
-2. 1.55000001 -2. -2. 4.85000014 -2.
-2. ]
"""
要するに、各ノードが配列の要素に対応しており、
それぞれ配列に、左の子ノード、右の子ノード、分割に使う特徴量、分割に使う閾値が順番に入っています。
これらの情報を日本語に変化して表示すると次の様になるでしょうか。
for i in range(n_nodes):
print("\nノード番号:", i)
if children_left[i] == -1:
print(" このノードは葉です。")
print(" 予測結果: ")
for v, t in zip(node_values[i][0], iris.target_names):
print(" "+t+": ", round(v/sum(node_values[i][0]), 3))
else:
print(
" "+iris.feature_names[feature[i]],
"が",
round(threshold[i], 3),
"未満の場合、ノード:",
children_left[i],
"に進み、それ以外の場合は、",
children_right[i],
"に進む。"
)
出力結果のテキストはこちらです。
ノード番号: 0
petal width (cm) が 0.8 未満の場合、ノード: 1 に進み、それ以外の場合は、 2 に進む。
ノード番号: 1
このノードは葉です。
予測結果:
setosa: 1.0
versicolor: 0.0
virginica: 0.0
ノード番号: 2
petal width (cm) が 1.75 未満の場合、ノード: 3 に進み、それ以外の場合は、 10 に進む。
ノード番号: 3
petal length (cm) が 4.95 未満の場合、ノード: 4 に進み、それ以外の場合は、 7 に進む。
ノード番号: 4
petal width (cm) が 1.65 未満の場合、ノード: 5 に進み、それ以外の場合は、 6 に進む。
ノード番号: 5
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 1.0
virginica: 0.0
ノード番号: 6
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 0.0
virginica: 1.0
ノード番号: 7
petal width (cm) が 1.55 未満の場合、ノード: 8 に進み、それ以外の場合は、 9 に進む。
ノード番号: 8
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 0.0
virginica: 1.0
ノード番号: 9
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 0.667
virginica: 0.333
ノード番号: 10
petal length (cm) が 4.85 未満の場合、ノード: 11 に進み、それ以外の場合は、 12 に進む。
ノード番号: 11
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 0.333
virginica: 0.667
ノード番号: 12
このノードは葉です。
予測結果:
setosa: 0.0
versicolor: 0.0
virginica: 1.0
先日可視化した結果とバッチリ対応していますね。 |
TensorFlow 1 version View source on GitHub
An estimator for TensorFlow linear models with user-specified head.
tf.estimator.LinearEstimator(
head, feature_columns, model_dir=None, optimizer='Ftrl', config=None,
sparse_combiner='sum', warm_start_from=None
)
Example:
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b],
optimizer=lambda: tf.keras.optimizers.Ftrl(
learning_rate=tf.compat.v1.train.exponential_decay(
learning_rate=0.1,
global_step=tf.compat.v1.train.get_global_step(),
decay_steps=10000,
decay_rate=0.96))
# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
head=tf.estimator.MultiLabelHead(n_classes=3),
feature_columns=[categorical_column_a,
categorical_feature_a_x_categorical_feature_b])
optimizer=tf.keras.optimizers.Ftrl(
learning_rate=0.1,
l1_regularization_strength=0.001
))
def input_fn_train:
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_eval:
# Returns tf.data.Dataset of (x, y) tuple where y represents label's class
# index.
pass
def input_fn_predict:
# Returns tf.data.Dataset of (x, None) tuple.
pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
Input of train and evaluate should have following features,otherwise there will be a KeyError:
if weight_columnis notNone, a feature withkey=weight_columnwhose value is aTensor.
for each columninfeature_columns:
if columnis aCategoricalColumn, a feature withkey=column.namewhosevalueis aSparseTensor.
if columnis aWeightedCategoricalColumn, two features: the first withkeythe id column name, the second withkeythe weight column name. Both features'valuemust be aSparseTensor.
if columnis aDenseColumn, a feature withkey=column.namewhosevalueis aTensor.
if
Loss and predicted output are determined by the specified head.
Args
head A Head instance constructed with a method such astf.estimator.MultiLabelHead.
feature_columns An iterable containing all the feature columns used bythe model. All items in the set should be instances of classes derivedfrom FeatureColumn.
model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model.
optimizer An instance of tf.keras.optimizers.* used to train the model.Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp','SGD'), or callable. Defaults to FTRL optimizer.
config RunConfig object to configure the runtime settings.
sparse_combiner A string specifying how to reduce if a categorical columnis multivalent. One of "mean", "sqrtn", and "sum" -- these areeffectively different ways to do example-level normalization, which canbe useful for bag-of-words features. for more details, seetf.feature_column.linear_model.
warm_start_from A string filepath to a checkpoint to warm-start from, ora WarmStartSettings object to fully configure warm-starting. If thestring filepath is provided instead of a WarmStartSettings, then allweights and biases are warm-started, and it is assumed that vocabulariesand Tensor names are unchanged.
Eager Compatibility
Estimators can be used while eager execution is enabled. Note that input_fnand all hooks are executed inside a graph context, so they have to be writtento be compatible with graph mode. Note that input_fn code using tf.datagenerally works in both graph and eager modes.
Attributes
config
export_savedmodel
model_dir
model_fn Returns the model_fn which is bound to self.params.
params
Methods
eval_dir
eval_dir(
name=None
)
Shows the directory name where evaluation metrics are dumped.
Args
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns
A string which is the path of directory contains evaluation metrics.
evaluate
evaluate(
input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)
Evaluates the model given evaluation data input_fn.
For each step, calls input_fn, which returns one batch of data.Evaluates until:
stepsbatches are processed, or
input_fnraises an end-of-input exception (tf.errors.OutOfRangeErrororStopIteration).
Args
input_fn A function that constructs the input data for evaluation. SeePremade Estimatorsfor more information. The function should construct and return one ofthe following:
steps Number of steps for which to evaluate model. If None, evaluatesuntil input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used forcallbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, thelatest checkpoint in model_dir is used. If there are no checkpointsin model_dir, evaluation is run with newly initialized Variablesinstead of ones restored from checkpoint.
name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns
A dict containing the evaluation metrics specified in model_fn keyed byname, as well as an entry global_step which contains the value of theglobal step for which this evaluation was performed. For cannedestimators, the dict contains the loss (mean loss per mini-batch) andthe average_loss (mean loss per sample). Canned classifiers also returnthe accuracy. Canned regressors also return the label/mean and theprediction/mean.
Raises
ValueError If steps <= 0.
experimental_export_all_saved_models
experimental_export_all_saved_models(
export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False,
checkpoint_path=None
)
Exports a SavedModel with tf.MetaGraphDefs for each requested mode.
For each mode passed in via the input_receiver_fn_map,this method builds a new graph by calling the input_receiver_fn to obtainfeature and label Tensors. Next, this method calls the Estimator'smodel_fn in the passed mode to generate the model graph based onthose features and labels, and restores the given checkpoint(or, lacking that, the most recent checkpoint) into the graph.Only one of the modes is used for saving variables to the SavedModel(order of preference: tf.estimator.ModeKeys.TRAIN,tf.estimator.ModeKeys.EVAL, thentf.estimator.ModeKeys.PREDICT), such that up to threetf.MetaGraphDefs are saved with a single set of variables in a singleSavedModel directory.
For the variables and tf.MetaGraphDefs, a timestamped export directorybelow export_dir_base, and writes a SavedModel into it containing thetf.MetaGraphDef for the given mode and its associated signatures.
For prediction, the exported MetaGraphDef will provide one SignatureDeffor each element of the export_outputs dict returned from the model_fn,named using the same keys. One of these keys is alwaystf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,indicating which signature will be served when a serving request does notspecify one. For each signature, the outputs are provided by thecorresponding tf.estimator.export.ExportOutputs, and the inputs are alwaysthe input receivers provided by the serving_input_receiver_fn.
For training and evaluation, the train_op is stored in an extracollection, and loss, metrics, and predictions are included in aSignatureDef for the mode in question.
Extra assets may be written into the SavedModel via the assets_extraargument. This should be a dict, where each key gives a destination path(including the filename) relative to the assets.extra directory. Thecorresponding value gives the full path of the source file to be copied.For example, the simple case of copying a single file without renaming itis specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base A string containing a directory in which to createtimestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys toinput_receiver_fn mappings, where the input_receiver_fn is afunction that takes no arguments and returns the appropriate subclass ofInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directorywithin the exported SavedModel, or None if no extra assets areneeded.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default),the most recent checkpoint found within the model directory is chosen.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError if any input_receiver_fn is None, no export_outputsare provided, or no checkpoint can be found.
export_saved_model
export_saved_model(
export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False,
checkpoint_path=None, experimental_mode=ModeKeys.PREDICT
)
Exports inference graph as a SavedModel into the given dir.
For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling theserving_input_receiver_fn to obtain feature Tensors, and then callingthis Estimator's model_fn to generate the model graph based on thosefeatures. It restores the given checkpoint (or, lacking that, the mostrecent checkpoint) into this graph in a fresh session. Finally it createsa timestamped export directory below the given export_dir_base, and writesa SavedModel into it containing a single tf.MetaGraphDef saved from thissession.
The exported MetaGraphDef will provide one SignatureDef for eachelement of the export_outputs dict returned from the model_fn, namedusing the same keys. One of these keys is alwaystf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY,indicating which signature will be served when a serving request does notspecify one. For each signature, the outputs are provided by thecorresponding tf.estimator.export.ExportOutputs, and the inputs are alwaysthe input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extraargument. This should be a dict, where each key gives a destination path(including the filename) relative to the assets.extra directory. Thecorresponding value gives the full path of the source file to be copied.For example, the simple case of copying a single file without renaming itis specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
The experimental_mode parameter can be used to export a singletrain/eval/predict graph as a SavedModel.See experimental_export_all_saved_models for full docs.
Args
export_dir_base A string containing a directory in which to createtimestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn A function that takes no argument and returns atf.estimator.export.ServingInputReceiver ortf.estimator.export.TensorServingInputReceiver.
assets_extra A dict specifying how to populate the assets.extra directorywithin the exported SavedModel, or None if no extra assets areneeded.
as_text whether to write the SavedModel proto in text format.
checkpoint_path The checkpoint path to export. If None (the default),the most recent checkpoint found within the model directory is chosen.
experimental_mode tf.estimator.ModeKeys value indicating with mode willbe exported. Note that this feature is experimental.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError if no serving_input_receiver_fn is provided, noexport_outputs are provided, or no checkpoint can be found.
get_variable_names
get_variable_names()
Returns list of all variable names in this model.
Returns
List of names.
Raises
ValueError If the Estimator has not produced a checkpoint yet.
get_variable_value
get_variable_value( name)
Returns value of the variable given by name.
Args
name string or a list of string, name of the tensor.
Returns
Numpy array - value of the tensor.
Raises
ValueError If the Estimator has not produced a checkpoint yet.
latest_checkpoint
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns
The full path to the latest checkpoint or None if no checkpoint wasfound.
predict
predict(
input_fn, predict_keys=None, hooks=None, checkpoint_path=None,
yield_single_examples=True
)
Yields predictions for given features.
Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn A function that constructs the features. Prediction continuesuntil input_fn raises an end-of-input exception(tf.errors.OutOfRangeError or StopIteration). See PremadeEstimatorsfor more information. The function should construct and return one ofthe following:
predict_keys list of str, name of the keys to predict. It is used ifthe tf.estimator.EstimatorSpec.predictions is a dict. Ifpredict_keys is used then rest of the predictions will be filteredfrom the dictionary. If None, returns all.
hooks List of tf.train.SessionRunHook subclass instances. Used forcallbacks inside the prediction call.
checkpoint_path Path of a specific checkpoint to predict. If None, thelatest checkpoint in model_dir is used. If there are no checkpointsin model_dir, prediction is run with newly initialized Variablesinstead of ones restored from checkpoint.
yield_single_examples If False, yields the whole batch as returned bythe model_fn instead of decomposing the batch into individualelements. This is useful if model_fn returns some tensors whose firstdimension is not equal to the batch size.
Yields
Evaluated values of predictions tensors.
Raises
ValueError If batch length of predictions is not the same andyield_single_examples is True.
ValueError If there is a conflict between predict_keys andpredictions. For example if predict_keys is not None buttf.estimator.EstimatorSpec.predictions is not a dict.
train
train(
input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None
)
Trains a model given training data input_fn.
Args
input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: |
ä¼åå¨å¯è½æ¯æ·±åº¦å¦ä¹ æâçå¦âçä¸ä¸ªæ¨¡åä¹ä¸äºï¼ææ¶åæ¢ä¸ä¸ªä¼åå¨å°±è½å¸¦æ¥ææ¾çæåï¼ææ¶åå«äººè¯´æåå¾å¤çä¼åå¨ç¨å°èªå·±çä»»å¡ä¸å´ä¸ä¸ç¹ç¨é½æ²¡æï¼ç论æ§è´¨å¥½çä¼åå¨ä¸ä¸å®å·¥ä½å¾å¾å¥½ï¼çº¯ç²¹æèè¢èæ¥çä¼åå¨ä¹æªå¿ 就差äºãä½ä¸ç®¡ææ ·ï¼ä¼åå¨ç»ç©¶ä¹ä¸ºçç±â深度ç¼ä¸¹âçå妿ä¾äºå¤ä¸ä¸ªéæ©ã
è¿å å¹´æ¥ï¼å ³äºä¼åå¨çå·¥ä½ä¼¼ä¹ä¹å¨æ ¢æ ¢å¢å¤ï¼å¾å¤è®ºæé½æåºäºå¯¹å¸¸ç¨ä¼åå¨ï¼å°¤å ¶æ¯Adamï¼ç大大å°å°çæ¹è¿ãæ¬æå°±æ±æ»ä¸äºä¼åå¨å·¥ä½ææå·§ï¼å¹¶ç»ä¸ç»åºäºä»£ç å®ç°ï¼ä¾è¯»è æéè°ç¨ã
åºæ¬å½¢å¼ #
æè°âæ´¾çâï¼å°±æ¯æç¸å ³çæå·§é½æ¯å»ºç«å¨å·²æçä¼åå¨ä¸çï¼ä»»æä¸ä¸ªå·²æçä¼åå¨é½å¯ä»¥ç¨ä¸è¿äºæå·§ï¼ä»èåæä¸ä¸ªæ°çä¼åå¨ã
å·²æçä¼åå¨çåºæ¬å½¢å¼ä¸ºï¼
\begin{equation}\begin{aligned}\boldsymbol{g}_t =&\, \nabla_{\boldsymbol{\theta}} L\\
\boldsymbol{h}_t =&\, f(\boldsymbol{g}_{\leq t})\\
\boldsymbol{\theta}_{t+1} =&\, \boldsymbol{\theta}_t - \gamma \boldsymbol{h}_t
\end{aligned}\end{equation}
å
¶ä¸$\boldsymbol{g}_t$峿¢¯åº¦ï¼è$\boldsymbol{g}_{\leq t}$æçæ¯æªæ¢å°å½åæ¥çæææ¢¯åº¦ä¿¡æ¯ï¼å®ä»¬ç»è¿æç§è¿ç®$f$ï¼æ¯å¦ç´¯ç§¯å¨éã累积äºé¶ç©æ ¡æ£å¦ä¹ ççï¼åå¾å°$\boldsymbol{h}_t$ï¼ç¶åç±$\boldsymbol{h}_t$æ¥æ´æ°åæ°ï¼è¿éç$\gamma$å°±æ¯æå¦ä¹ çã
åå¼æç© #
ä¸é¢ä»ç»ä¼åå¨ç6个åå¼ï¼ä¹å¯ä»¥ç解为ç¨ä¼å卿¶çä¸äºæå·§ãè¿äºæå·§ææ¶åä¼å¾ææï¼ææ¶åä¹å¯è½æ æçè³åä½ç¨ï¼ä¸è½ä¸æ¦è论ï¼åªè½ç解为å¤ä¸ç§éæ©å°±å¤ä¸ç§å¯è½ã
æéè¡°å #
æéè¡°åæçæ¯ç´æ¥å¨ä¼å卿¯ä¸æ¥æ´æ°åé¢å ä¸ä¸ä¸ªè¡°å项ï¼
\begin{equation}\begin{aligned}\boldsymbol{g}_t =&\, \nabla_{\boldsymbol{\theta}} L\\
\boldsymbol{h}_t =&\, f(\boldsymbol{g}_{\leq t})\\
\boldsymbol{\theta}_{t+1} =&\, \boldsymbol{\theta}_t - \gamma \boldsymbol{h}_t - \gamma \lambda \boldsymbol{\theta}_t
\end{aligned}\end{equation}
å
¶ä¸$\lambda$称为âè¡°åçâãå¨SGDä¸ï¼æéè¡°åçä»·äºå¾losséè¾¹å å
¥$l_2$æ£å项$\frac{1}{2}\lambda \Vert \boldsymbol{\theta}\Vert_2^2$ï¼ä½å¨AdagradãAdamç带æèªéåºå¦ä¹ ççä¼åå¨ä¸ï¼$f$åæäºé线æ§ï¼æä»¥ä¸¤è
ä¸å¨çä»·ããDecoupled Weight Decay Regularizationã䏿ç¹å«æåºæéè¡°åçé²è¿æåè½åä¼äºå¯¹åºç$l_2$æ£åï¼æ¨èå¤§å®¶ä½¿ç¨æéè¡°åè䏿¯$l_2$æ£åã
å±èªéåº #
å¨ä¼åå¨ä¸ï¼æåçæ´æ°éç±$\boldsymbol{h}_t$åå¦ä¹ ç$\gamma$å³å®ï¼ææ¶å$\boldsymbol{h}_t$çæ¨¡é¿ä¼å¤§äºåæ°$\boldsymbol{\theta}_t$çæ¨¡é¿ï¼è¿å¯è½ä¼å¯¼è´æ´æ°çä¸ç¨³å®ãæä»¥ï¼ä¸ä¸ªç´æ¥çæ³æ³æ¯ï¼æ¯ä¸å±çåæ°çæ´æ°å¹
度åºè¯¥ç±$\boldsymbol{\theta}_t$çæ¨¡é¿çæ¨¡é¿æ¥è°æ§ãè¿ä¸ªç´æ¥çæ³æ³å°±å¯¼è´äºå¦ä¸çä¼åå¨åä½ï¼
\begin{equation}\begin{aligned}\boldsymbol{g}_t =&\, \nabla_{\boldsymbol{\theta}} L\\
\boldsymbol{h}_t =&\, f(\boldsymbol{g}_{\leq t})\\
\boldsymbol{\theta}_{t+1} =&\, \boldsymbol{\theta}_t - \gamma \boldsymbol{h}_t\times \frac{\Vert\boldsymbol{\theta}_t\Vert_2}{\Vert\boldsymbol{h}_t\Vert_2}
\end{aligned}\end{equation}
妿åºç¡ä¼å卿¯Adamï¼é£ä¹ä¸è¿°ä¼åå¨å°±æ¯LAMBã论æãLarge Batch Optimization for Deep Learning: Training BERT in 76 minutesãæåºLAMBå¨batch sizeè¾å¤§ï¼æåä¸ä¸ï¼çæ¶åæ¯Adamææè¦å¥½ã
åæ®µçº¿æ§å¦ä¹ ç #
å¦ä¹ ç乿¯ä¼åå¨ä¸çä¸ä¸ªè¿·ä¹åå¨ï¼é常æ¥è¯´ç²¾è°å¦ä¹ ççç¥è½å¤è·å¾ä¸å®æåï¼è䏿°å½çå¦ä¹ ççè³å¯è½å¯¼è´æ¨¡å䏿¶æã常è§çå¦ä¹ ççç¥æwarmupãææ°è¡°åãæå±å¼ä¸éï¼æ¯å¦æä¸ªepochåç´æ¥éå°åæ¥ç1/10ï¼çï¼æ¯è¾è¿·çè¿æcosåå¦ä¹ ççç¥ãå¤é¡¹å¼åå¦ä¹ ççç¥çã
èèå°å¸¸è§ç彿°é½å¯ä»¥ç¨å段线æ§å½æ°é¼è¿ï¼æä»¥ç¬è
å¹²èå¼å
¥äºä¸ä¸ªå段线æ§å¦ä¹ çççç¥ï¼ä¾å¤§å®¶é便ç©ãå½¢å¼å¦ä¸ï¼
\begin{equation}\begin{aligned}\boldsymbol{g}_t =&\, \nabla_{\boldsymbol{\theta}} L\\
\boldsymbol{h}_t =&\, f(\boldsymbol{g}_{\leq t})\\
\boldsymbol{\theta}_{t+1} =&\, \boldsymbol{\theta}_t - \gamma \rho_t\boldsymbol{h}_t
\end{aligned}\end{equation}
å
¶ä¸$\rho_t$æ¯æä¸ªä»¥æ¥æ°$t$为èªåéçåæ®µçº¿æ§å½æ°ã
梯度累积 #
梯度累积ä¹åå¨ãç¨æ¶é´æ¢åææï¼Keras梯度累积ä¼åå¨ã䏿ä¸ä¹ä»ç»è¿äºï¼å ¶å®ä¸ç®ä¼åå¨çåå¼ï¼ä½å¯ä»¥åå°ä¼åå¨ä¸ï¼éè¿å°batch sizeè¾¾å°å¤§batch sizeçææï¼å®ç°æ¶é´æ¢ç©ºé´ãæ´å¤§çbatch sizeææ¶åè½æåææï¼å°¤å ¶æ¯åºåbatch sizeè¿å°çæ åµä¸ï¼8以ä¸ï¼ï¼ã
éè¿°ãç¨æ¶é´æ¢åææï¼Keras梯度累积ä¼åå¨ãä¸æå ³äºæ¢¯åº¦ä¸éçæè¿°ï¼
æè°æ¢¯åº¦ç´¯ç§¯ï¼å ¶å®å¾ç®åï¼æä»¬æ¢¯åº¦ä¸éæç¨ç梯度ï¼å®é 䏿¯å¤ä¸ªæ ·æ¬ç®åºæ¥ç梯度çå¹³åå¼ï¼ä»¥batch_size=128为ä¾ï¼ä½ å¯ä»¥ä¸æ¬¡æ§ç®åº128ä¸ªæ ·æ¬ç梯度ç¶åå¹³åï¼æä¹å¯ä»¥æ¯æ¬¡ç®16ä¸ªæ ·æ¬ç平忢¯åº¦ï¼ç¶åç¼åç´¯å èµ·æ¥ï¼ç®å¤äº8次ä¹åï¼ç¶åææ»æ¢¯åº¦é¤ä»¥8ï¼ç¶åææ§è¡åæ°æ´æ°ãå½ç¶ï¼å¿ 须累积å°äº8次ä¹åï¼ç¨8次ç平忢¯åº¦æå»æ´æ°åæ°ï¼ä¸è½æ¯ç®16ä¸ªå°±å»æ´æ°ä¸æ¬¡ï¼ä¸ç¶å°±æ¯batch_size=16äºã
Lookahead #
Lookaheadä¼å卿¥èªè®ºæãLookahead Optimizer: k steps forward, 1 step backãï¼å¨ä¹åçæç« ãKeraså®ç°ä¸¤ä¸ªä¼åå¨ï¼LookaheadåLazyOptimizerãä¹ä»ç»è¿ãLookaheadçå«ä¹æ¯ç¨å¸¸ç¨ä¼åå¨é¢å
å¾åæ¸ç´¢å æ¥ï¼ç¶åæ ¹æ®æ¸ç´¢ç»ææ¥æ´æ°ï¼æµç¨å¦ä¸ï¼
\begin{equation}\begin{aligned}&\boldsymbol{g}_t =\, \nabla_{\boldsymbol{\theta}} L\\
&\boldsymbol{h}_t =\, f(\boldsymbol{g}_{\leq t})\\
&\boldsymbol{\theta}_{t+1} =\, \boldsymbol{\theta}_t - \gamma\boldsymbol{h}_t\\
&\text{妿}t\,\text{mod}\,k = 0\text{:}\\
&\qquad\boldsymbol{\Theta}_{t+1} = \boldsymbol{\Theta}_t + \alpha (\boldsymbol{\theta}_{t+1}- \boldsymbol{\Theta}_t)\\
&\qquad\boldsymbol{\theta}_{t+1} = \boldsymbol{\Theta}_{t+1} \,(\text{å³è¦ç忥ç}\boldsymbol{\theta}_{t+1})
\end{aligned}\end{equation}
å ¶å®è¿ä¸ªä¼åå¨å«Lookaback乿 妨ï¼ä¹å°±æ¯æ¯èµ°å æ¥å°±å¾åççï¼è·å æ¥åçæéå个æå¼ã
Lazyä¼åå¨ #
Lazyä¼åå¨å¨åæçæç« ãKeraså®ç°ä¸¤ä¸ªä¼åå¨ï¼LookaheadåLazyOptimizerãä¸ä¹ä»ç»è¿ï¼å ¶æ¬æå°±æ¯Embeddingå±çæ´æ°åºå½ç¨çåä¸äºï¼è¿æå©äºé²æ¢è¿æåãï¼åèç¥ä¹è®¨è®ºï¼
åèå®ç° #
åé¢çä»ç»æ¯è¾ç®åï¼äºå®ä¸è¿äºå弿¬èº«ç¡®å®ä¸é¾çè§£ï¼å ³é®è¿æ¯ä»£ç å®ç°ãä»åé¢çä»ç»ä¸å¤§å®¶å¯ä»¥åç°ï¼è¿6个åå¼å¹¶æ çç¾ä¹å¤ï¼å æ¤è¯å¥½çå®ç°åºå½è½è®©æä»¬è½æç§¯æ¨è¬ç»åå ¶ä¸ä¸ä¸ªæå¤ä¸ªåå¼ä½¿ç¨ï¼å¦å¤ï¼ç®åkerasä¹æä¸¤ä¸ªåæ¯ï¼çº¯kerasåtf.kerasï¼è¯å¥½çå®ç°åºå½è½åæ¶å ¼å®¹å®ä»¬ï¼æè åæ¶ç»åºä¸¤ç§å®ç°ï¼ã
ç»æç§»è±æ¥æ¨ #
è½ç¶é¨ååå¼å¨ä¹åä¹å®ç°è¿äºï¼ä½è¿éè¿æ¯ç¨æ°çæ¹å¼éæ°å®ç°äºå®ä»¬ãè¿éçå®ç°æ¹å¼æºäºç¬è æå¤åç°çä¸ç§ç§»è±æ¥æ¨æå·§ã
å设æä»¬æè¿æ ·ä¸ä¸ªç±»ï¼
import numpy as np
class A(object):
def __init__(self):
self.a = np.ones(1)
self.b = np.ones(2)
self.c = np.ones(3)
ç¶åå设æä»¬è¦ç»§æ¿Aç±»å¾å°ä¸ä¸ªBç±»ï¼Bç±»æ¯è¦å°__init__æ¹æ³çæænp.onesæ¿æ¢ä¸ºnp.zerosï¼å
¶ä½é½ä¸åãç±äº__init__å¯è½æ¯ä¸ä¸ªå¾å¤æçæµç¨ï¼å¦æå°å®å®æ´å¤å¶è¿æ¥ç¶åæ¹åæ¾ç¶å¤ªåä½äºã
ææ²¡æç´æ¥åå è¡ä»£ç å°±è½æ¿æ¢ææçå¢ï¼è¿çæï¼
class B(A):
def __init__(self):
_ = np.ones
np.ones = np.zeros
super(B, self).__init__()
np.ones = _
æäºè¿ä¸ªdemoï¼æä»¬å°±å¯ä»¥âéæ¹âå·²æä¼åå¨äºãå¨kerasä¸ï¼åæ°çæ´æ°é½æ¯éè¿K.updateæ¥å®ç°çï¼åèkerasçoptimizers.py)ï¼æä»¬åªéè¦ç¨ä¸è¿°æ¹å¼éæ°å®ä¹ä¸ä¸K.update就好ã
妿æ¯tf.keraså¢ï¼å¾éæ¾ï¼è¿ç§æ¹å¼ä¸å¯è¡ï¼å 为tf.kerasä¸å¸¸ç¨ä¼åå¨çè¿ä»£æµç¨é½è¢«åå°Céè¾¹å»äºï¼åètf.kerasçadam.pyï¼ï¼æä»¬çä¸å°ä»£ç ï¼ä¹å°±ä¸è½ç¨è¿ç§æ¹æ³æ¹äºãä¸ä¸ªè§£å³åæ³æ¯æä»¬èªå·±éæ°å®ç°ä¸ä¸ªä¼åå¨å¦Adamï¼å°è¿ä»£æµç¨æ´é²åºæ¥ï¼è¿æ ·æä»¬å°±è½ç¨ä¸è¿°æ¹å¼éæ¹äºã
使ç¨ç¤ºä¾ #
æ ¹æ®ä¸è¿°æè·¯ç»ä¸å®ç°ç6个ä¼åå¨åä½ï¼é½è¢«æ¾å¨ç¬è
çbert4keras项ç®ä¸ï¼bert4keras.optimizersã
ææå½æ°ä¼æ ¹æ®kerasè¿æ¯tf.kerasæ¥è¿è¡æ£ç¡®ç导å
¥ï¼ä»èå®ç°keras/tf.kerasé½è½ç¨åæ ·çæ¹å¼ä½¿ç¨ãéè¾¹èªå¸¦äºä¸ä¸ªAdamå®ç°ï¼è¿ä¸ªAdamæ¯ä¸é¨åç»tf.kerasçã对äºtf.kerasï¼å¦ææ³è¦å®ç°ä¸è¿°åå¼ï¼é£åªè½ç¨bert4kerasèªå¸¦çä¼åå¨ï¼ç®ååªæAdamï¼ï¼ä¸è½ç¨tf.keraså
ç½®çä¼åå¨ã
åè代ç ï¼
from bert4keras.optimizers import *
# å˜æˆå¸¦æƒé‡è¡°å‡çš„Adam
AdamW = extend_with_weight_decay(Adam, 'AdamW')
optimizer = AdamW(learning_rate=0.001, weight_decay_rate=0.01)
# å˜æˆå¸¦åˆ†æ®µçº¿æ€§å¦ä¹ 率的Adam
AdamLR = extend_with_piecewise_linear_lr(Adam, 'AdamLR')
# 实现warmup,å‰1000æ¥å¦ä¹ 率从0å¢žåŠ åˆ°0.001
optimizer = AdamLR(learning_rate=0.001, lr_schedule={1000: 1.})
# å˜æˆå¸¦æ¢¯åº¦ç´¯ç§¯çš„Adam
AdamGA = extend_with_gradient_accumulation(Adam, 'AdamGA')
optimizer = AdamGA(learning_rate=0.001, grad_accum_steps=10)
# 组åˆä½¿ç”¨
AdamWLR = extend_with_piecewise_linear_lr(AdamW, 'AdamWLR')
# 带æƒé‡è¡°å‡å’Œwarmup的优化器
optimizer = AdamWLR(learning_rate=0.001,
weight_decay_rate=0.01,
lr_schedule={1000: 1.})
ï¼æ³¨ï¼ä¸ä¸å®ç°è¿ä¹å¤ä¸ªä¼åå¨ï¼èä¸åæ¶è¦å
¼å®¹kerasåtf.kerasï¼é¾å
å¯è½ä¼æéæ¼ä¹å¤ï¼å¦æåç°ï¼ä¸æä¸åææ£ãï¼
å卿å #
ç¼ä¸¹ä¸æï¼ä¸ç¼ä¸çæã
转载å°è¯·å
æ¬æ¬æå°åï¼https://kexue.fm/archives/7094
æ´è¯¦ç»ç转载äºå®è¯·åèï¼ãç§å¦ç©ºé´FAQã
妿æ¨è§å¾æ¬æè¿ä¸éï¼æ¬¢è¿å享/æèµæ¬æãæèµå¹¶éè¦ä»ä¸è·å¾æ¶çï¼èæ¯å¸æç¥éç§å¦ç©ºé´è·å¾äºå¤å°è¯»è
ççå¿å
³æ³¨ãå½ç¶ï¼å¦æä½ æ è§å®ï¼ä¹ä¸ä¼å½±åä½ çé
读ã忬¡è¡¨ç¤ºæ¬¢è¿åæè°¢ï¼
妿æ¨éè¦å¼ç¨æ¬æï¼è¯·åèï¼
èåæ. (Nov. 25, 2019). ã6个派çä¼åå¨çç®åä»ç»åå ¶å®ç° ã[Blog post]. Retrieved from https://kexue.fm/archives/7094
ä½ ä¹è®¸è¿å¯¹ä¸é¢çå 容æå ´è¶£
ä»å¨åå¦è§åº¦çä¼åç®æ³ï¼äºï¼ï¼ä¸ºä»ä¹å¦ä¹ çä¸å®è¿å°ï¼
æ®éåå½ççç¥æ¢¯åº¦ä¸é¶é¶ä¼å
ä»éæ ·çä¼åï¼å¯å¯¼ä¼åä¸ä¸å¯å¯¼ä¼åçç»ä¸è§è§
AdaXä¼å卿µ æï¼é弿ºå®ç°ï¼
èçæ¾åçéè®¡ç®æå·§ä¹æäºKerasçäº
bert4keras卿ï¼baselineææ
AdaFactorä¼å卿µ æï¼é弿ºå®ç°ï¼
对æè®ç»æµ è°ï¼æä¹ãæ¹æ³åæèï¼éKeraså®ç°ï¼
Kerasï¼Tensorflowçé»éæ å
ä»ä¹æ¶åå¤è¿ç¨çå 鿝å¯ä»¥å¤§äº1ï¼ |
本文属于机器翻译版本。若本译文内容与英语原文存在差异,则一律以英文原文为准。
sam 本地 start-lambda
使您能够使用 AWS CLI 或 SDKs 以编程方式在本地调用 Lambda 函数。 此命令启动模拟 AWS Lambda 的本地终端节点。您可以针对此本地 Lambda 终端节点运行自动测试。当您使用 AWS CLI 或开发工具包向此终端节点发送调用时,它会在本地执行在请求中指定的 Lambda 函数。
要在本地测试使用 Lambda 扩展的无服务器应用程序,请将 ENABLE_LAMBDA_EXTENSIONS_PREVIEW 环境变量设置为“1”。例如:
ENABLE_LAMBDA_EXTENSIONS_PREVIEW=1 sam local start-lambda
有关 Lambda 扩展的更多信息,请参阅 中的AWS Lambda使用 扩展。AWS Lambda Developer Guide
用量:
sam local start-lambda [OPTIONS]
示例:
# SETUP
# ------
# Start the local Lambda endpoint by running this command in the directory that contains your AWS SAM template.
sam local start-lambda
# USING AWS CLI
# -------------
# Then, you can invoke your Lambda function locally using the AWS CLI
aws lambda invoke --function-name "HelloWorldFunction" --endpoint-url "http://127.0.0.1:3001" --no-verify-ssl out.txt
# USING AWS SDK
# -------------
# You can also use the AWS SDK in your automated tests to invoke your functions programatically.
# Here is a Python example:
#
# self.lambda_client = boto3.client('lambda',
# endpoint_url="http://127.0.0.1:3001",
# use_ssl=False,
# verify=False,
# config=Config(signature_version=UNSIGNED,
# read_timeout=0,
# retries={'max_attempts': 0}))
# self.lambda_client.invoke(FunctionName="HelloWorldFunction")
选项:
选项 描述
--host TEXT 要绑定到的本地主机名或 IP 地址 (默认值:'127.0.0.1')。
-p, --port INTEGER 要侦听的本地端口号 (默认值: '3001')。
-t, --template PATH 模板文件 [default: template.[yaml|yml]]。AWS SAM
-n, --env-vars PATH 包含 Lambda 函数的环境变量值的 JSON 文件。
--parameter-overrides 可选。包含编码为键值对的 AWS CloudFormation 参数的字符串覆盖。使用与 AWS CLI 相同的格式 — 例如,'ParameterKey=KeyPairName, ParameterValue=MyKey ParameterKey=InstanceType,ParameterValue=t1.micro'。
-d, --debug-port TEXT 如果指定,则以调试模式启动 Lambda 函数容器,并在本地主机上公开此端口。
--debugger-path TEXT 要挂载到 Lambda 容器中的调试程序的主机路径。
--debug-args TEXT 要传递到调试程序的其他参数。
--warm-containers [EAGER | LAZY]
可选。指定 AWS SAM CLI 如何管理每个函数的容器。
提供两个选项:
--debug-function
可选。指定在指定 Lambda 时要将调试选项应用到的
-v, --docker-volume-basedir TEXT 文件所在基目录的位置。AWS SAM如果 Docker 正在远程计算机上运行,则必须挂载 Docker 计算机上 AWS SAM 文件所在的路径,并修改此值以匹配远程计算机。
--docker-network TEXT Lambda Docker 容器应连接到的现有 Docker 网络的名称或 ID 以及默认桥接网络。如果指定此项,则 Lambda 容器仅连接到默认桥接 Docker 网络。
--container-env-vars 可选。在本地调试时将环境变量传递到映像容器。
-l, --log-file TEXT 要将运行时日志发送到的日志文件。
--layer-cache-basedir DIRECTORY 指定您的模板使用的层下载到的位置依据。
--skip-pull-image 指定 CLI 是否应跳过为 Lambda 运行时下拉最新的 Docker 映像。
--force-image-build 指定 CLI 是否应重新生成用于通过层调用函数的图像。
--profile TEXT 获取 AWS 凭证的凭证文件中的特定配置文件。
--region TEXT 要部署到的 AWS 区域。例如,us-east-1。
--config-file PATH 包含要使用的默认参数值的配置文件的路径和文件名。默认值为项目根目录中的“samconfig.toml”。有关配置文件的详细信息,请参阅 AWS SAM CLI 配置文件。
--config-env TEXT 配置文件中指定要使用的默认参数值的环境名称。默认值为“default”。有关配置文件的详细信息,请参阅 AWS SAM CLI 配置文件。
--debug 打开调试日志记录以输出由 AWS SAM CLI 生成的调试消息和显示时间戳。
--help 显示此消息并退出。 |
Internal Server Error: /auth/github/login/
Traceback (most recent call last):
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/channels/handler.py", line 243, in process_exception_by_middleware
return super(AsgiHandler, self).process_exception_by_middleware(exception, request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/allauth/socialaccount/providers/oauth2/views.py", line 73, in view
return self.dispatch(request, *args, **kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/allauth/socialaccount/providers/oauth2/views.py", line 96, in dispatch
app = provider.get_app(self.request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/allauth/socialaccount/providers/base.py", line 52, in get_app
return SocialApp.objects.get_current(self.id, request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/allauth/socialaccount/models.py", line 40, in get_current
provider=provider)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/db/models/query.py", line 380, in get
self.model._meta.object_name
allauth.socialaccount.models.DoesNotExist: SocialApp matching query does not exist.
Actually, I'm trying to figure out how to host a demo on the web application.. Can you please give me some hints how to do that??
I've been doing this:
@vpn1997 The Problem persists... I removed the repo... cloned it again... built the latest branch...
Hey. there's something bugging me... Whenever I go for the Create Demo or My Demo... The Following Link shows up...
http://localhost:8000/accounts/github/login/callback/?error=redirect_uri_mismatch&error_description=The+redirect_uri+MUST+match+the+registered+callback+URL+for+this+application.&error_uri=https%3A%2F%2Fdeveloper.github.com%2Fapps%2Fmanaging-oauth-apps%2Ftroubleshooting-authorization-request-errors%2F%23redirect-uri-mismatch&state=kY5I4QfO9vX2
Also... There's something new with the latest branch. The DB_USER and DB_PASS are not the username and password for django admin page
dpkg -l | grep postgres .This will give you all the postgresql packages then run sudo apt-get --purge remove < all package names with space in between>. Then run sudo rm -rf /var/lib/postgresql/sudo rm -rf /var/log/postgresql/sudo rm -rf /etc/postgresql/
Internal Server Error: /bundleup/1624217/13994201/
Traceback (most recent call last):
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/channels/handler.py", line 243, in process_exception_by_middleware
return super(AsgiHandler, self).process_exception_by_middleware(exception, request)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/rest_framework/views.py", line 489, in dispatch
response = self.handle_exception(exc)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/rest_framework/views.py", line 449, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/rest_framework/views.py", line 486, in dispatch
response = handler(request, *args, **kwargs)
File "/home/rick/origami-cloudcv/lib/python3.5/site-packages/rest_framework/decorators.py", line 52, in handler
return func(*args, **kwargs)
File "/home/rick/origami-cloudcv/Origami/api/views.py", line 314, in bundleup
hash_.update(key)
TypeError: Unicode-objects must be encoded before hashing
@techytushar docker ps -a and removing containers didn't work. What worked was:λ ~* sudo netstat -nlpt |grep 6379 λ ~* sudo service redis-server stop
Also is there permanent thing for docker setup like: I can run the docker container when I get back.
like docker run -it <container-id>
Also why are we always firing these commands when we are running Origami:python manage.py runserver --noworker
python manage.py runworker
Is one for django channels and other for general server?
Also it would have been nice if we could fire 3 terminals at same time with some bash script / other configuration(not docker) ?
Hi @/all,
We’re very happy to announce that CloudCV has been selected for GSoC ’19! This is our fifth consecutive year with GSoC, and we’re looking forward to your contributions and to an exciting summer ahead! For new students, you can get started using our GSoC Ideas page http://gsoc.cloudcv.org.
Feel free to reach out to the mentors of different projects and ask them questions if you have any. Also, please carefully go through the instructions on the official GSoC page of CloudCV mentioned below:
GSoC Website Page: https://summerofcode.withgoogle.com/organizations/5709446018236416/
Happy Coding! |
I am building a GUI that requires me to log on to a remote computer via ssh. I am using paramiko to do this.
What I want to achieve is that a log in window is showed when the application is launched. The user has to put in some credentials. If the login is successful, then display the main window of the application. If the login fails, then remain at the login window.
If login succeeds, I want the ssh_client object to be passed on to the MainWindow class, so that the established connection can be used to perform tasks on the remote computer. However, how can I pass the ssh_client object to MainWindow?
The following code runs, but makes no attempt to use the established ssh_client. What could I do to be able to use the ssh_client from Login in MainWindow?
Perhaps I should just reestablish the connection in MainWindow - bu then I need to pass the credentials to MainWindow, which seems like the same kind of problem I am having right now.
import Tkinter as tk
import paramiko
import time
class Application(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(self)
container.grid(row=0, column=0, sticky="nsew")
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (Login, MainWindow):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
self.show_frame(Login)
def show_frame(self, cont):
frame = self.frames[cont]
frame.tkraise()
class Login(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.ssh_client = paramiko.SSHClient()
self.ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
self.parent = parent
self.controller = controller
self.grid(row=0, column=0)
self.user = tk.StringVar()
self.user.set("my_username") # Default user
self.host_options = ["host1", "host2"]
self.host = tk.StringVar()
self.host.set(self.host_options[0]) # Default hostname
l_user = tk.Label(self, text="Username: ")
l_user.grid(row=0, column=0, sticky=tk.E)
self.entry_user = tk.Entry(self)
self.entry_user.grid(row=0, column=1, sticky=tk.W)
self.entry_user.insert(0, self.user.get())
l_pwd = tk.Label(self, text="Password: ")
l_pwd.grid(row=1, column=0, sticky=tk.E)
self.entry_pwd = tk.Entry(self, show="*")
self.entry_pwd.grid(row=1, column=1, sticky=tk.W)
l_host = tk.Label(self, text="Hostname: ")
l_host.grid(row=2, column=0, sticky=tk.E)
optionmenu_host = tk.OptionMenu(self, self.host, *self.host_options)
optionmenu_host.grid(row=2, column=1, sticky=tk.W)
b_login = tk.Button(self, text="Log in", command=self.authorize)
b_login.grid(row=3, column=0, sticky=tk.W)
b_quit = tk.Button(self, text="Quit", command=self.parent.destroy)
b_quit.grid(row=4, column=0, sticky=tk.W)
def authorize(self):
try:
self.ssh_client.connect(hostname=self.host.get(), username=self.entry_user.get(), password=self.entry_pwd.get())
self.controller.show_frame(MainWindow)
except paramiko.AuthenticationException:
l_error = tk.Label(self, text="Login failed...", fg="red")
l_error.grid(row=4, column=1, sticky=tk.W)
l_error.after(2000, l_error.destroy)
class MainWindow(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.grid(row=0, column=0)
l = tk.Label(self, text="Log in was successful!")
l.grid(row=0, column=0, sticky=tk.W)
###################################
# run application
if __name__ == "__main__":
app = Application()
app.mainloop()
###################################
|
10 Nov 2018
Salesforce reporting introduces some fascinating complexities to data visibility and exposure, particularly for organizations using Private Organization-Wide Defaults.
The key complicating factor is this: when a Salesforce report is run, it’s run in the context of some user or another, and the records that are shown on the report are the ones that are visible to that user. This means that report distribution solutions have to be very careful to only show each user a report run in their own context - not someone else’s.
Suppose your organization builds a critical report that many users will need to review. It’s built to show “My Opportunities”, so each user will see only their own Opportunities, and the Opportunity Organization-Wide Default is Private. You add a criterion to the report to only show Opportunities that have your internal “Needs Attention” Checkbox set. Now: how do you make sure your users are regularly updated when they have Opportunities that require their review?
A naive solution would create one subscription to this report, say for Frank Q. Exec, and add all of the users who need to receive it as recipients:
But this runs afoul of the principle mentioned above: the report’s context user is Frank, and the recipients of the report will see data as if they were Frank. From Salesforce:
IMPORTANT Recipients see emailed report data as the person running the report. Consider that they may see more or less data than they normally see in Salesforce.
This is unlikely to be an acceptable outcome.
Further, we can’t simply have Frank create many subscriptions to the same report, adding one user as both the recipient and the running user to each: Frank only gets five total report subscriptions, and he can only have one subscription to each report.
Of course, users can schedule reports themselves, in their own context, and they can run them manually, and we can build dynamic dashboards (which come with their own limits). But what if we really need to create these subscriptions for our users automatically, or allow our admins to manage them for thousands of users at a time? What if, in fact, we want to offer the users a bespoke user interface to let them select subscriptions to standard corporate reports, or run reports in their contexts to feed into an external reporting or business intelligence solution?
This is a question I’ve struggled with before, and I was excited to see Martin Borthiry propose the issue on Salesforce Stack Exchange. Here, I’d like to expand on the solution I sketched out in response to Martin’s question.
Background
There are two report subscription functionalities on Salesforce, and they work rather differently. Report subscriptions are summarized in the Salesforce documentation under Schedule and Subscribe to Reports.
On Classic, one can “Subscribe” to a report, and one can “Schedule Future Runs”. The nomenclature here is confusing: a Classic “Subscribe” asks Salesforce to notify us if the report’s results meet certain thresholds, but it’s not for regularly receiving copies of the report. We’re not going to look at this feature. “Schedule Future Runs” is equivalent to a report subscription in Lightning and is the feature corresponding to the business problem discussed above.
On Lightning, we simply have an option to Subscribe, as we saw above. There’s no Lightning equivalent to the Classic “Subscribe” feature.
So what happens when we subscribe to a report?
The Classic Schedule Future Runs and the Lightning Subscribe functionality is represented under the hood as CronTrigger and CronJobDetail records with the CronJobDetail.JobType field set to 'A', for Analytics Notification. You can find them in queries from the Developer Console or Workbench via queries like
SELECT CronExpression, OwnerId, CronJobDetail.Name FROM CronTrigger WHERE CronJobDetail.JobType = 'A'
Unfortunately, knowing this doesn’t help us very much. Neither CronTrigger nor CronJobDetail can be created directly in Apex or via the API, and the objects provide very little detail about existing report subscriptions. The Report Id, for example, is notable by its absence, and the Name field is just a UUID.
A more promising avenue for our use case is the Reports and Dashboards API, because it offers an endpoint to create an Analytics Notification.
POST /services/data/vXX.0/analytics/notifications
with a JSON body like this
{
"active" : true,
"createdDate" : "",
"deactivateOnTrigger" : false,
"id" : "",
"lastModifiedDate" : "",
"name" : "New Notification",
"recordId" : "00OXXXXXXXXXXXXXXX",
"schedule" : {
"details" : {
"time" : 3
},
"frequency" : "daily"
},
"source" : "lightningReportSubscribe",
"thresholds" : [ {
"actions" : [ {
"configuration" : {
"recipients" : [ ]
},
"type" : "sendEmail"
} ],
"conditions" : null,
"type" : "always"
} ]
}
The feature set shown here in JSON is at parity with the user interface, and has the same limitations. Adding a recipient for the subscription over the API, for example, suffers from the same visibility flaws as doing so in the UI. And the API doesn’t let us do what we truly want to - create report subscriptions for other users that run as those other users - because we cannot set the owner of the subscription programmatically.
… or can we?
While the Reporting and Analytics API doesn’t support setting the context user for a subscription, it always takes action as the user as whom we authenticate to the API. And that we can control.
While an admin can Login As a user to create a one-off subscription, we’re more interested here in industrial-strength solutions that can support thousands of users. So we’re going to build a script to create subscriptions by talking to the Reports and Dashboards API, using the Javascript Web Token (JWT) OAuth authentication mechanism. Why? Because the JWT flow is our only route to seamlessly authenticating as any (admin-approved) user, with no manual intervention or setup required on a per-user basis.
Setup: Connected Apps and Certificates
Setting up the JWT flow involves building a Connected App in Salesforce, under which our scripts will authenticate. JWT is secured using a certificate and associated public key/private key pair - Salesforce holds the public key, our script holds the private key.
This is the same mechanism used for authentication in many Continuous Integration solutions. I’m not going to rehash all of the details here, because they’re well-covered elsewhere. You can follow Salesforce’s steps in using SFDX for continuous integration, or read through my own article about setting up CircleCI with Salesforce DX.
When you’re finished building the Connected App, add the Profiles of each of the users who are to be subscribed to reports to the Connected App as a pre-approved Profile, or assign all of those users a Permission Set and assign that Permission Set as pre-approved on the Connected App. This ensures that we can authenticate to the API as those users without any intervention.
Building the Scripts
We’re going to stick to sketching out a solution here that can be adapted to many different business problems, as we discussed earlier. For simplicity, we’ll use Salesforce DX to handle the JWT authentication, even though we’re not using SFDX for development here. Because it’s my preferred scripting workflow, I’ll be using Python with simple_salesforce, but you could just as easily achieve this in Ruby, Java, JavaScript, or even just bash and curl.
The main job of our script is to login as a user and create a report subscription for them. We might build this towards a specific business process by adding scaffolding to, for example, query a custom object out of Salesforce to define which reports should be subscribed automatically for which users, but we’ll leave that elaboration to a later date. Once we’ve got that core functionality achieved, we can wrap it in the logic we need for specific applications.
Let’s put the key field (private key) from our JWT setup in a file called server.key. Put the username of the user we want to subscribe (who must be pre-authorized to the Connected App) in the environment variable $USERNAME and the Connected App’s Consumer Key in $CONSUMERKEY.
Then we can get an Access Token to make an API call into Salesforce, letting SFDX do the heavy lifting:
sfdx force:auth:jwt:grant --clientid $CONSUMERKEY --jwtkeyfile server.key --username $USERNAME -a reports-test
export INSTANCE_URL=$(sfdx force:org:display --json -u reports-test | python -c "import json; import sys; print(json.load(sys.stdin)['result']['instanceUrl'])")
export ACCESS_TOKEN=$(sfdx force:org:display --json -u reports-test | python -c "import json; import sys; print(json.load(sys.stdin)['result']['accessToken'])")
(If you have jq installed, you can simplify these one-liners).
Now we’ve established an authenticated session as $USERNAME, even though we do not have that user’s credentials or any setup for that user besides preauthorizing their profile on the Connected App, and we have the values we need (the Access Token and Instance URL) stored in our environment.
Now we’ll switch over to Python. A quick script grabs those environment variables and uses simple_salesforce to make an API call to generate the report subscription.
import simple_salesforce
import os
import sys
outbound_json = """
{
"active" : true,
"createdDate" : "",
"deactivateOnTrigger" : false,
"id" : "",
"lastModifiedDate" : "",
"name" : "New Notification",
"recordId" : "%s",
"schedule" : {
"details" : {
"time" : 3
},
"frequency" : "daily"
},
"source" : "lightningReportSubscribe",
"thresholds" : [ {
"actions" : [ {
"configuration" : {
"recipients" : [ ]
},
"type" : "sendEmail"
} ],
"conditions" : null,
"type" : "always"
} ]
}"""
# Use an Access Token and Report Id to add a Lightning report subscription for this user
# such that the report will run as that user.
access_token = os.environ['ACCESS_TOKEN']
instance_url = os.environ['INSTANCE_URL']
report_id = sys.argv[1]
sf = simple_salesforce.Salesforce(session_id=access_token, instance_url=instance_url)
sf.restful(
'analytics/notifications',
None,
method='POST',
data=outbound_json % report_id
)
Execute the script
python add-subscription.py $REPORTID
where $REPORTID is the Salesforce Id of the report you wish to subscribe the user for, and then if we log in as that user in the UI, we’ll find a shiny new Lightning report subscription established for them.
Note that it’s set for daily at 0300, as specified in the example JSON.
Next Steps
We’ve got a proof-of-concept in place showing that we can in fact schedule results for users run as those users. In an article to follow soon, we’ll look at operationalizing this approach and building out business processes atop it.
22 Oct 2018
The <lightning:dataTable> component has built-in support for displaying links in table columns. The syntax looks something like this:
{
label: 'Case Number',
fieldName: 'My_URL_Field__c',
type: 'url',
typeAttributes: {
label: {
fieldName: 'CaseNumber'
}
},
sortable: true
}
typeAttributes.label.fieldName identifies a field on each row to utilize as the title of the link, while fieldName at the top level specifies the URL field itself.
In many cases, though, what we have in our sObject data isn’t a fully-qualified URL: it’s a Salesforce Id, a lookup to this record or to some other record, and we’d really like to display it sensibly as a link with an appropriate title. Unfortunately, <lightning:dataTable> doesn’t have an Id column type, and the url type is not clever enough to realize it’s been handed a record Id and handle it.
Instead, we need to generate the URL ourselves and add it as a property of the line items in the source data. (This is a bewildering shift for seasoned Apex programmers: we can just add fields to our sObjects?!) In the callback from the Apex server method querying our sObjects, we generate one or more synthetic properties:
cases.forEach(function(item) {
item['URL'] = '/lightning/r/Case/' + item['Id'] + '/view';
}
Our column entry will end up looking like this:
{
label: 'Case Number',
fieldName: 'URL',
type: 'url',
typeAttributes: {
label: {
fieldName: 'CaseNumber'
},
target: '_self'
},
sortable: true
}
Then, the result’s just what you might think:
The Case Number column is hyperlinked to open the related Case record.
Note that we’re using the Lightning form of the record URL (/lightning/r/<sObject>/<id>/view), and we’ve added the target: '_self' attribute to the typeAttributes map. This results in the link opening in the current tab, in non-Console applications, and loading a new Console tab in Sales or Service Console. The default behavior, if target is not specified, is to open a new browser tab, even in Console applications, which will often not be the desired behavior.
Using the Classic form of the record URL (/<id>) does work, but redirects through a translation URL. For future compatibility, it’s best to just use the current-generation URL format.
This process of synthesizing new fields for <lightning:dataTable> can be repeated arbitrarily, both for URL fields and for other types of calculated data, like icon columns. It’s important to remember, of course, that these synthetic properties cannot be persisted to the server because they’re not real sObject fields. Input to server actions must be constructed appropriately.
The JavaScript’s loosey-goosey type system and object model can be confusing for Apex programmers, but it offers a lot of freedom in return - and the ability to do things with sObjects we’d need a wrapper class to handle in Visualforce.
10 Oct 2018
The Salesforce documentation is notably terse in describing Considerations When Using GROUP BY. The guidance provided for determining which fields can be grouped is simply:
The Field object associated with DescribeSObjectResult has a groupable field that defines whether you can include the field in a GROUP BY clause.
This is a rather roundabout way to point to the method DescribeFieldResult.isGroupable(); for example, Account.Description.getDescribe().isGroupable() returns false.
Further, the document states that
You can’t use child relationship expressions that use the __r syntax in a query that uses a GROUP BY clause.
This gives us very little to go on, without examining the Describe information for every single field we might want to group on (doubly challenging in a Dynamic SOQL context). So which field types do, in fact, permit grouping? And does the final sentence prohibit the use of custom relationships in GROUP BY?
Failure to use a properly groupable field yields the error
field ‘FIELD_NAME__c’ can not be grouped in a query call
It turns out that groupability breaks down pretty cleanly along type lines, with a few interesting nuances. The underlying SOAP type appears to be the primary, but not the sole, determinant. Some fields within the same SOAP type differ in groupability based on other facets. Further, some formula fields can be used as groupings - but not the ones you might naively expect from other Salesforce Platform limitations!
Types are listed below by UI type, with the SOAP type in parentheses. This information was derived from inspection of numerous field describes via Workbench and the Tooling API.
Groupable Field Types
Checkbox (boolean)
Phone (string)
Picklist (string)
Email (string)
Text (string)
Text Area (string)
URL (string)
Number (int). Does not include custom fields, only standard Number fields with SOAP type int, like Account.NumberOfEmployees.
Lookup (id)
Id (id)
Date (date)
Direct cross-object references to groupable fields, up to 5 levels from the root object (SOQL limit), as in SELECT count(Id) FROM Contact GROUP BY Account.Parent.Parent.Parent.Parent.Name. Both custom and standard references are groupable.
Formulas of type Checkbox and Date, including cross-object formulas across standard and custom relationships.
Non-Groupable Field Types
Address Compound Fields
Components of Address compound fields are groupable if their types otherwise allow it.
Geolocations, both custom and standard, and whether or not defined as having decimal places, including the compound field and components (location/double)
Long Text (string)
Rich Text (string)
Auto Number (string)
Multi-Select Picklist (string)
Number (double), including custom Number fields with or without decimal and regardless of scale.
Percent (double), including custom Percent fields with or without decimal and regardless of scale.
Currency (double), including custom Currency fields with or without decimal and regardless of scale.
Roll-Up Summary Fields (double), including COUNT rollups.
Encrypted Text Fields (Classic Encryption; string)
Date/Time (dateTime)
Time (time)
Formulas of types other than Checkbox and Date, including the otherwise-groupable String type.
This post grew out of an interesting question on Salesforce Stack Exchange. I was intrigued by the lack of definition to this facet of SOQL and spent some time putting together a community wiki answer, which revealed that my original answer was mistaken: GROUP BY is stranger than I thought.
08 Oct 2018
Dreamforce ‘18 featured some truly outstanding sessions on the next generation of Salesforce technologies and development practices. Standouts included the excellent sessions showing how technologies like Platform Events, Change Data Capture, Unlocked Packages, and Force-DI lead to modular, loosely-coupled, and event driven Salesforce applications.
My session, Continuous Integration and Salesforce DX: Concepts and Connections, had over 350 registrants and yielded some excellent and important questions about moving to Salesforce DX practice. It was a great experience, and the talk is now available on YouTube.
I successfully completed the Salesforce Data Architecture and Management Designer certification, which means I’ve reached the Certified Application Architect level. This was my primary goal for 2018 and I’m thrilled to complete one half of the Architect pyramid. On to System Architect, and Dreamforce 2019! |
The getsizeof function from the sys module can be used to obtain the size of objects in Python. Comparing the size of elementary objects in Python with that in other languages can be quite interesting.
# Tried with Python 3.2.2 64-bit
import sys
a = None
sys.getsizeof( a ) # 16
a = 0
sys.getsizeof( a ) # 24
a = 12345678
sys.getsizeof( a ) # 28
a = ""
sys.getsizeof( a ) # 58
a = "hello"
sys.getsizeof( a ) # 68 (2 bytes per letter)
a = []
sys.getsizeof( a ) # 64
a = tuple()
sys.getsizeof( a ) # 48
a = set()
sys.getsizeof( a ) # 224
a = {}
sys.getsizeof( a ) # 272
|
The package has been documented mostly with Axion multi-well MEA data, but it can work with any data. Here we show how data can be read in using a generic text reader, with a simple data format. To read in the data, we need two data files, and we need to provide information about the layout of the plate.
require(meaRtools)
## Loading required package: meaRtools
show_top_file <- function(file) {
cat(readLines(file, 10), sep='\n')
}
The text reader requires two text files to be prepared: one with the spike times and one with the channel positions.
A CSV file is used to store at least two columns: Channel containing the channel name, and Time containing the time (in seconds) at which a spike was detected on that channel. The rows of the CSV do not need to be ordered by channel, but the times for each channel should be ordered, earliest spike first.
A second CSV contains the (x,y) spatial location each channel. If the recording is from a multi-well array, then an extra Well column determines the name of the well. (If no Well column is provided, the package assumes a plate with one well.)
The package by default has information about just two plates, the Axion 12 and 48 well systems. As shown in the examples below, in addition we need to provide information regarding the plate. If any information is missing, the system will revert to defaults.
Here there is just a single MEA recording, from a hexagonal MEA (Wong et al., 1993). As there is only one well, the Well information is absent from the position file. The data files are provided within the package, and the top of each file looks as follows:
times = system.file("extdata/textreader/wong1993_p0.times", package="meaRtools")
pos = system.file("extdata/textreader/wong1993_p0.pos", package="meaRtools")
show_top_file(times)
Channel,Timec1,14.51755c1,14.56795c1,78.75835c1,78.7723c1,78.7951c1,78.80975c1,78.83945c1,78.86245c1,78.90175
show_top_file(pos)
"Channel","x","y""c1",70,-242.48"c2",0,-242.48"c3",0,-242.48"c4",-140,-242.48"c5",175,-181.86"c6",175,-181.86"c7",105,-181.86"c8",-35,-181.86"c9",-35,-181.86
hex_platelayout = list(n_well = 1, #number of wells
wells = c("w1"), #names of those wells.
n_well_r = 1, # number of wells / row
n_well_c = 1, # number of wells / col
layout = c(1, 1), # layout when plotting
n_elec_r = 8,
n_elec_c = 8,
xlim = c(-400, 400), # xlimits for plotting
ylim = c(-400, 400), # ylimits for plotting
spacing = 50, # distance (um) separating electrodes
corr_breaks = 0 # vector of correlation distances
)
add_plateinfo("hex-1well", hex_platelayout)
## [1] "Axion 48 well" "Axion 12 well" "hex-1well"
s = read_spikelist_text(times, pos, array="hex-1well")
meaRtools:::.plot_mealayout(s$layout, use_names = TRUE, cex=0.3)
meaRtools:::.plot_meanfiringrate(s, main = "Mean Firing Rate by Plate (Hz)")
This second dataset is a composite of six recordings from P9 and P11 mouse retina (Demas et al, 2013), synthesised to make a 6-well plate.
demas_platelayout = list(n_well = 6,
wells = paste0("w", 1:6),
n_well_r = 2,
n_well_c = 3,
layout = c(3, 2),
n_elec_r = 8,
n_elec_c = 8,
xlim = c(-100, 7200),
ylim = c(0, 6000),
spacing = 200,
corr_breaks = 0
)
add_plateinfo("demas-6well", demas_platelayout)
## [1] "Axion 48 well" "Axion 12 well" "hex-1well"
## [4] "demas-6well"
times = system.file("extdata/textreader/demas.times", package="meaRtools")
pos = system.file("extdata/textreader/demas.pos", package="meaRtools")
s = read_spikelist_text(times, pos, array="demas-6well")
meaRtools:::.plot_mealayout(s$layout, use_names = TRUE, cex=0.3)
The test data come from the following two references:
Demas J, Eglen SJ, Wong ROL (2003) Developmental loss of synchronous spontaneous activity in the mouse retina is independent of visual experience. Journal of Neuroscience 23:2851–2860 Available at: https://www.ncbi.nlm.nih.gov/pubmed/12684472.
Wong RO, Meister M, Shatz CJ (1993) Transient period of correlated bursting activity during development of the mammalian retina. Neuron 11:923–938 http://dx.doi.org/10.1016/0896-6273(93)90122-8. |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
file: kd_tree_kNN.py
author: xjump.me#at#gmail#dot#com
REF:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html
'''
import numpy as np
import operator
from scipy.spatial import cKDTree as KDTree
import sys
sys.setrecursionlimit(10000)
if __name__=="__main__":
v0 = np.array([1,2,3,4,5,6])
train_data_set = np.array([
[1.2,3,6,7,3,2],
[2,9,17,7,6,59],
[1.2,44,6,3,3,23],
[9,3,51,7,3,100],
[18,4,39,7,3,21],
[66,8,28,7,3,88],
[3,1,2,7,3,33],
[24,0.5,1,7,3,56],
[22,99,7,7,3,0.6],
[70,13,9,7,3,2],
])
tree = KDTree(train_data_set)
for k in range(1,10):
print 'k=',k
#test sample v0, return 3 nearest point, dimesion is k
print tree.query(v0,k=3,p=k)
|
We have if statements, we have else statements, we can also have elif statements.
Now you may be asking yourself, what the heck is an elif statement? It’s exactly what it sounds like, “else if”. An elif statement checks another condition after the previous if statements conditions aren’t met.
We can use elif statements to control the order we want our program to check each of our conditional statements. First, the if statement is checked, then each elif statement is checked from top to bottom, then finally the else code is executed if none of the previous conditions have been met.
Let’s take a look at this in practice. The following if statement will display a “thank you” message after someone donates to a charity; there will be a curated message based on how much was donated.
print("Thank you for the donation!")
if donation >= 1000:
print("You've achieved platinum status")
elif donation >= 500:
print("You've achieved gold donor status")
elif donation >= 100:
print("You've achieved silver donor status")
else:
print("You've achieved bronze donor status")
Take a second to think about this function. What would happen if all of the elif statements were simply if statements? If you donated $1100.00, then the first three messages would all print because each if condition had been met.
But because we used elif statements, it checks each condition sequentially and only prints one message. If I donate $600.00, the code first checks if that is over 1000, which it is not, then it checks if it’s over 500, which it is, so it prints that message, then because all of the other statements are elif and else, none of them get checked and no more messages get printed.
Try your hand at some other elif statements.
Instructions
1.
Calvin Coolidge’s Cool College has noticed that students prefer to get letter grades.
Write an if/elif/else statement that:
If gradeis 90 or higher, print"A"
Else if gradeis 80 or higher, print"B"
Else if gradeis 70 or higher, print"C"
Else if gradeis 60 or higher, print"D"
Else, print "F" |
Artikel ini ialah terjemahan "Machine Learning is Fun part 2" oleh Adam Geitgey. Sambungan dari terjemahan artikel sebelum ini.
Kalau terjemah secara langsung, tajuk artikel sepatutnya berbunyi "Seronoknya Pembelajaran Mesin". Tapi sebab SEO dan perkataan pembelajaran mesin tak digunakan secara meluas dalam bahasa Malaysia, saya ubah tajuk dan guna perkataan Kepintaran Buatan.
Saya teruskan usaha penterjemahan ini, sebab artikel dalam siri ini, sebab saya suka artikel siri ini. Mudah faham.
Saya sedaya upaya menterjemah sedekat mungkin dengan maksud penulis asal. Walaubagaimanapun, untuk orang ramai lebih mudah faham, saya akan mencelah dan mengubah struktur ayat sedikit sebanyak.
Ada masalah? Tak setuju dengan terjemahan saya? Ada pendapat? Komen di bawah atau tweet saya.
Dalam bahagian 1, kita menyatakan Pembelajaran Mesin (ML) ialah menggunakan algoritma umum untuk menyampaikan sessuatu yang menarik berkaitan dengan data, tanpa menulis sebarang kod tertentu (khusus) bagi menyelesaikan sesuatu masalah. (Kalau anda tak baca lagi bahagian 1, baca sekarang!)
Kali ini, kita akan menyaksikan salah satu daripada algoritma umum yang menghasilkan sesuatu yang hebat. Iaitu membina level dalam permainan video yang kelihatan seolah-olah hasil air tangan manusia. Kita akan membina satu rangkaian neural (Neural Network). Kita suap level dalam Super Mario yang sedia ada, kemudian satu level baru akan tercipta.
Sama seperti bahagian 1, tunjuk ajar ini ditujukan hanya kepada mereka yang bersungguh-sungguh nak belajar tentang pembelajaran mesin. Tetapi tak tahu bagaimana nak mula. Tujuan siri artikel ini, supaya boleh difahami oleh semua orang. Maksudnya, artikel ini akan menerangkan secara umum dan kita akan langkau banyak bahagian yang lebih mendalam. Tapi, siapa peduli? Seandainya, artikel ini dapat menarik hati seseorang menjadi semakin minat nak tahu tentang pembelajaran mesin, maka misi selesai.
Sebelum ini, artikel bahagian 1, kita buat satu algoritma mudah yang menjangka harga sesebuah rumah berdasarkan ciri-cirinya. Diberikan data tentang sebuah rumah seperti berikut:
Kita berakhir dengan fungsi jangkaan mudah seperti berikut:
def estimate_house_sales_price(num_of_bedrooms, sqft, neighborhood): price = 0 # a little pinch of this price += num_of_bedrooms * 0.123 # and a big pinch of that price += sqft * 0.41 # maybe a handful of this price += neighborhood * 0.57 return price
Dalam kata lain, kita teka harga sesebuah rumah dengan cara mendarab setiap ciri-cirinya dengan suatu pekali. Kemudian kita hanya tambah semua nombor-nombor tersebut untuk mendapatkan harga rumah.
Daripada kita koding, jom kita tunjukkan fungsi yang sama dalam bentuk gambarrajah mudah:
Walaubagaimanapun, algoritma ini hanya berfungsi untuk masalah mudah yang mana keputusannya berkait secara langsung dengan data yang diterima. Macammana kalau harga sebenar sebuah rumah tak semudah yang disangka? Contoh, mungkin faktor kejiranan sangat mempengaruhi harga, hanya untuk rumah-rumah yang bersaiz besar dan kecil, tetapi tidak kepada rumah-rumah saiz sederhana. Bagaimana kita nak tangkap isu yang agak kompleks ini dalam model kita?
Bagi menjadikan ianya lebih bijak, kita boleh jalankan algoritma ini banyak kali dengan pekali yang berbeza dan masing-masing dengan situasi yang berbeza:
Sekarang kita ada 4 jangkaan harga yang berbeza. Jom kita gabungkan keempat-empat jangkaan harga tadi kepada satu jangkaan akhir. Kita akan jalankan kesemuanya melalui satu algoritma yang sama sekali lagi (tapi berlainan kumpulan pekali)!
Jawapan Terbaik kita yang baru, menggabungkan jangkaan daripada empat cubaan berbeza kita dalam menyelesaikan masalah. Sebab itu, ia boleh membina menimbang tara lebih banyak situasi berbanding dengan apa yang kita boleh dapat dalam satu situasi mudah.
Mari gabungkan keempat-empat cubaan kita tadi dalam satu gambarajah besar:
Inilah rangkaian neural! Setiap nod tahu cara nak dapatkan kumpulan data, mengenakan pekali ke atasnya dan seterusnya mengira nilai hasil. Dengan merantaikan bersama kesemua nod-nod ini, kita boleh menghasilkan fungsi model yang kompleks.
Terdapat banyak perkara yang saya langkau dalam usaha memudahkan penerangan (termasuk ciri perskalaan, feature scaling dan fungsi pengaktifan, activation function), tetapi bahagian paling mustahak adalah idea-idea ini masuk:
Kita buat satu fungsi jangkaan mudah yang mengambil satu kumpulan data dan mendarabkan dengan pekali untuk mendapat satu output. Kita sebut fungsi mudah ini sebagai neuron
Dengan merantaikan pelbagai bersama, kita boleh menghasilkan fungsi yang sangat kompleks untuk satu neuronneuron
Macam LEGO! Kita tidak boleh menghasilkan model dengan satu blok LEGO, tapi kita boleh menghasilkan pelbagai jenis model jika kita ada banyak blok LEGO digabung bersama:
Rangkaian neural yang kita lihat akan sentiasa memberi jawapan yang sama apabila diberi maklumat data yang sama. Ia tiada memori. Ia tidak ingat perkara yang lepas. Dalam terma pengaturcaraan, disebut algoritma tanpa keadaan, stateless algorithm
Dalam kebanyakan kes (macam menjangka harga rumah), itulah yang anda nak. Tetapi, satu perkara yang model sebegini tidak boleh buat ialah, mengambil kira pola data berdasarkan masa.
Bayangkan saya beri anda papan kekunci dan minta anda tulis satu cerita (dalam bahasa Inggeris). Tetapi, sebelum anda mula, kerja saya meneka huruf pertama untuk perkataan yang anda akan taip. Huruf apa yang saya akan teka?
Saya boleh guna pengetahuan bahasa Inggeris untuk meneka dengan lebih tepat. Contoh, anda mungkin akan menggunakan perkataan yang selalu digunakan sebagai perkataan permulaan ayat. Jika saya tengok cerita-cerita yang anda pernah tulis, saya boleh kecilkan lagi skop kepada perkataan-perkataan yang anda selalu guna sebagai pemula ayat. Setelah saya dapat semua data itu, saya boleh gunakannya untuk buat satu rangkaian neural model mengenal pasti, betapa besar kemungkinan anda menggunakan huruf yang diberi sebagai pemula ayat.
Model kita mungkin akan kelihatan seperti berikut:
Tapi, jom kita jadikan masalah lagi susah. Katakan saya perlu meneka huruf seterusnya yang anda akan taip pada mana-mana tempat dalam cerita yang anda karang. Ini barulah masalah yang lagi mencabar.
Kita gunakan beberapa perkataan pertama dalam The Sun Also Rises oleh Ernest Hemingway sebagai contoh:
Robert Cohn was once middlewight boxi
Huruf apa yang akan muncul seterusnya?
Anda mungkin meneka huruf 'n', dan perkataannya mungkin boxing. Kita semua tahu tentang ini berdasarkan huruf-huruf yang kita pernah lihat dalam ayat dan pengetahuan asas bahasa Inggeris kita. Tambahan, perkataan 'middleweight' memberi klu tambahan, bahawa kita sedang bercerita tentang tinju (boxing).
Dalam kata lain, agak senang untuk meneka huruf seterusnya jika kita mengambil kira susunan huruf yang datang sebelumnya dan digabungkan dengan pengetahuan tentang bahasa Inggeris.
Bagi menyelesaikan masalah ini dengan rangkaian neural, kita hendaklah menambah state (keadaan) kepada model kita. Setiap kali kita tanya rangkaian neural untuk suatu jawapan, kita juga akan menyimpan kumpulan kiraan perantara (intermediate calculations) dan guna semula mereka sebagai sebahagian dari input untuk lain kali. Melalui cara sebegitu, model kita mengubah tekaan berdasarkan input terbaru yang diperolehinya.
Menyimpan senarai keadaan ke dalam model, bukan sahaja membolehkan kita meneka huruf pertama dalam cerita kita, malahan boleh meneka huruf seterusnya berdasarkan huruf-huruf sebelumnya.
Inilah asas kepada idea Rangkaian Neural Berulang (Recurrent Neural Network). Kita mengemaskini rangkaian setiap kali kita menggunanya. Ini membenar model dikemaskini tekaannya berdasarkan apa yang terkini dilihatnya. Ia juga boleh merangka pola mengikut masa selagi mana kita memberinya memori secukupnya.
Meneka huruf seterusnya dalam satu cerita mungkin kelihatan sangat tidak berguna. Jadi, kenapa membazir masa?
Salah satu daripada adaptasi ialah fungsi tekaan-automatik (auto-predict) untuk papan kekunci telefon pintar:
Tapi, apa kata kita ambil idea ini ke arah yang lebih GANAS? Apa kata, kita minta model ini untuk meneka huruf terbaik secara berterusan, selamanya? Kita telah meminta algoritma ini untuk mengarang satu cerita yang sempurna untuk kita. Mungkin, boleh buat novel kot? Sebuah novel yang dikarang oleh AI?
Kita telah nampak bagaimana kita boleh meneka huruf seterusnya dalam ayat Hemmingway. Jom kita cuba menjana satu cerita penuh dengan gaya Hemmingway.
Untuk tujuan ini, kita akan menggunakan implementasi Rangkaian Neural Berulang, Reccurrent Neural Network, RNN oleh Andrej Karpathy. Andrej merupakan pengkaji Pembelajaran-Mendalam (
Deep Learning) di Stanford dan telah menulis artikel yang terbaik memperkenalkan cara menjana teks menggunakan RNN. Anda boleh melihat kod untuk model ini di github.
Fakhrul mencelah. Saya pun nak cuba bermain dengan kod ini dan mungkin akan merakam screencast.
Kia akan membina model daripada teks The Sun Also Rises, 362,239 aksara menggunakan 84 unik aksara (termasuk tanda kata, huruf kecil, huruf besar dll). Kumpulan data ni, sebenarnya sangat lah kecil jika nak dibandingkan penggunaan aplikasi dunia sebenar. Bagi menghasilkan satu model yang sangat bagus untuk menulis gaya Hemmingway, adalah lebih baik jika kita gunakan berkali ganda banyak teks contoh. Tetapi, sekadar untuk bermain-main dengan algoritma, data sebanyak ini pun memadai.
Selepas kita mula melatih RNN, ia sangat tidak pandai untuk meneka huruf seterusnya. Ini contoh hasil selepas melalui 100 kali ulangan latihan:
hjCTCnhoofeoxelif edElobe negnk e iohehasenoldndAmdaI ayio pe e h’e btentmuhgehi bcgdltt. gey heho grpiahe.
Ddelnss.eelaishaner” cot AAfhB ht ltny
ehbih a”on bhnte ectrsnae abeahngy
amo k ns aeo?cdse nh a taei.rairrhelardr er deffijha
Anda boleh lihat ia faham sesuatu, contohnya kadang-kala di antara perkataan adanya senggang, tapi tu je.
Selepas 1000 lelaran (ulangan), keadaan menunjukkan hasil yang lebih baik:
hing soor ither. And the caraos, and the crowebel for figttier and ale the room of me? Streat was not to him Bill-stook of the momansbed mig out ust on the bull, out here. I been soms
inick stalling that aid.
“Hon’t me and acrained on .Hw’s don’t you for the roed,” In’s pair.”
“Alough marith him.”
Model ini mula mengenal pola untuk struktur asas sesuatu ayat. Ia akan meletakkan noktah pada setiap pengakhir ayat, malah menggunakan tanda pembuka dan penutup kata untuk teks cakapan. Beberapa perkataan boleh dikenal pasti, tetapi masih ada perkataan yang tak wujud.
Tetapi selepas beberapa ribu ulangan lagi, ia kelihatan agak baik:
He went over to the gate of the café. It was like a country bed.
“Do you know it’s been me.”
“Damned us,” Bill said.
“I was dangerous,” I said. “You were she did it and think I would a fine cape you,” I said.
“I can’t look strange in the cab.”
“You know I was this is though,” Brett said.
“It’s a fights no matter?”
“It makes to do it.”
“You make it?”
“Sit down,” I said. “I wish I wasn’t do a little with the man.”
“You found it.”
“I don’t know.”
“You see, I’m sorry of chatches,” Bill said. “You think it’s a friend off back and make you really drunk.”
Pada saat ini, algoritma telah berjaya menangkap corak akar gaya Hemmingway iaitu, dialog yang pendek dan terus. Beberapa ayat pun mula memberi maksud.
Bandingkan dengan teks sebenar dari dalam buku:
There were a few people inside at the bar, and outside, alone, sat Harvey Stone. He had a pile of saucers in front of him, and he needed a shave.
“Sit down,” said Harvey, “I’ve been looking for you.”
“What’s the matter?”
“Nothing. Just looking for you.”
“Been out to the races?”
“No. Not since Sunday.”
“What do you hear from the States?”
“Nothing. Absolutely nothing.”
“What’s the matter?”
Walaupun sekadar melihat pola untuk satu huruf pada satu masa, algoritma kita berjaya menghasilkan prosa yang munasabah dengan format yang betul. Hal ini sangat menarik!
Kita tidak perlu pun menjana teks dari kosong. Kita boleh benihkan algoritma ini dengan menyatakan beberapa huruf awal dan membiarkan model ini mencari huruf seterusnya.
Saja-saja nak seronok, jom kita buat satu kulit buku palsu untuk novel imiginasi kita. Kita akan menjana nama penulis dan tajuk buku rekaan dengan benih-benih "Er", "He" dan "The S":
Boleh tahan hebat!
Tapi perkara yang boleh buat 'terkejut beruk' ialah apabila algoritma ini boleh melihat pola untuk pelbagai jenis turutan data. Ia dengan senangnya boleh menjana resepi yang nampak betul-betul boleh jadi masakan atau ucapan palsu Obama. Tetapi kenapa hadkan diri kita dengan perkataan manusia? Kita boleh mengaplikasi idea yang sama untuk pelbagai jenis data berturutan yang ada pola.
Pada tahun 2015, Nintendo melepaskan Super Mario MakerTM untuk konsol permainan Wii U.
Permainan ini membenarkan anda untuk membina level Super Mario Brother anda sendiri dengan gamepad dan kemudian memuatnaiknya ke internet supaya kawan-kawan anda boleh bermain melaluinya. Anda boleh memasukkan semua power-ups klasik dan musuh-musuh dari permainan Mario original dalam level anda. Ini seolah-olah satu set LEGO maya untuk orang-orang yang membesar dengan bermain Super Mario Brothers.
Fakhrul mencelah:
Masa kecil, saya orang kampung dan bukan yang jenis main game konsol. Jadi, saya tak de rasa nostalgia tentang game ni.
Ingat-ingat lupa. Pernah main dengan gameboy kawan kot...
Boleh tak kita guna model sama yang menjana teks palsu gaya Hemmingway tadi untuk membina level Super Mario Brothers?
Mula-mula, kita perlukan data set untuk melatih model kita. Jom kita lihat semua level outdoor dari permainan Super Mario Brothers original yang dilepaskan pada tahun 1985:
Permainan ini ada 32 level dan kurang lebih 70% daripadanya mempunyai gaya outdoor yang sama. Jadi, itu sahaja yang akan jadi sumber kita.
Untuk mendapatkan reka bentuk bagi setiap level, saya mengambil satu salinan original permainan tersebut dan menulis satu program yang boleh menarik reka bentuk level daripada memori permainan. Super Mario Bros. merupakan sebuah permainan berusia 30 tahun dan terdapat banyak sumber di internet yang anda boleh cari bagaimana level disimpan di dalam permainan memori. Data level yang diekstrak daripada permainan lama ini juga satu proses pengaturcaraan yang seronok, mungkin anda pun perlu cuba.
Ini level pertama daripada permainan tersebut (mungkin anda pun ingat yang anda pernah main dulu):
Jika kita lihat lebih dekat, kita akan perasan yang level ini terbina daripada grid ringkas beberapa objek:
Kita boleh juga dengan senangnya mempersembahkan balik grid-grid ini dalam bentuk susunan aksara dengan setiap aksara mewakili suatu objek:
------------------------------------------------------------------------------#??#-----------------------------------------------------------------------------------------------------##------=--=----------==---------==--==--------===--------===--===------====-------====--====----=====-=========================-
Kita gantikan setiap objek dalam level ini dengan huruf:
- ruang kosong
= blok keras
# bata boleh pecah
? block koin
... dan seterusnya, menggunakan huruf berbeza untuk objek yang berbeza dalam level tersebut.
Saya akhirnya berjaya menghasilkan fail teks seperti berikut:
Melihat kepada fail teks, anda akan dapat lihat bahawa level dalam Mario tak lah ada banyak sangat pola jika anda lihat baris demi baris:
Pola untuk satu level sangat menyerlah apabila kita fikirkan setiap level secara menegak:
Jadi dalam proses untuk algoritma ini mencari pola dalam data kita, kita perlu masukkan data secara lajur demi lajur. Mencari cara paling efektif mempersembahkan data anda (disebut sebagai pemilihan ciri feature selection) merupakan salah satu dari kunci membina algoritma pembelajaran mesin yang baik.
Untuk melatih model ini, saya perlu pusing fail teks tadi secara 90 darjah. Ini memastikan setiap aksara dimasukkan ke dalam model melalui keadaan pola lebih mudah kelihatan:
-----------=-------#---=-------#---=-------?---=-------#---=-----------=-----------=[email protected]=[email protected]=-----------=-----------=-----------=---------PP=---------PP=----------==---------===--------====-------=====------======-----=======---=========---=========
Seperti yang kita lihat semasa membina model untuk prosa Hemmingway, model semakin baik bila dlatih dengan lebih banyak.
Setelah latihan yang sekejap, model kita akan hanya mampu menghasilkan sampah:
--------------------------LL+<&=------P------------------------------------------T--#--------=--=-=------------=-&--T------------------------------------=------$-=#-=-_--------------=----=<-----------b-
Ia seolah-olah dapat idea tentang simbol - dan = sepatutnya muncul dengan banyak kali, tetapi itu sahaja. Ia belum mampu melihat pola data lagi.
Setelah seribu lelaran latihan, ia mula kelihatan seperti:
-------------=----------=--------PP=--------PP=-----------=-----------=-----------=-------?---=-----------=-----------=
Model ini mula mengecam bahawa setiap baris seharusnya sama panjang. Ia juga mula dapat kenal pasti logik dalam permainan Mario: Paip dalam Mario selalunya 2 blok lebar dan sekurang-kurangnya 2 blok tinggi, jadi huruf P dalam data perlu muncul dalam keadaan 2x2. Boleh tahan!
Dengan lebih banyak latihan, model kita sampai ke tahap boleh menjana data yang sah sempurna:
--------PP=--------PP=----------=----------=----------=---PPP=---=---PPP=---=----------=
Jom lihat contoh penuh data untuk seluruh level dari model kita dan pusingkannya menjadi melintang balik:
Data ini nampak mantap! Terdapat beberapa yang hebat kalau anda perasan:
Model kita meletakkan Lakitu (raksasa yang terapung atas awan) atas langit pada mula permulaan level, sama macam dalam level Mario sebenar
Model kita tahu tentang paip yang terapung atas awan perlu terletak atas blok keras dan bukan sekadar terapung
Model kita meletakkan musuh di tempat yang rasional
Model kita tidak meletakkan sesuatu yang boleh menghalang pemain dari maju ke depan sampai akhir
Model kita memberikan rasaseperti bermain dari level Super Mario Bros sebenar. Sebab ia berdasarkan level gaya original yang sedia ada dalam permainan.
Akhir sekali, mari kita ambil level ini dan buat semula di Super Mario Maker:
Cuba main sendiri!
Model algoritma Rangkaian Neural Berulang yang kita latih di atas, agak serupa dengan algoritma yang digunakan oleh syarikat dalam dunia sebenar untuk menyelesaikan masalah speech detection dan terjemahan bahasa. Kita bina model kita sebagai "mainan" berbanding dengan cutting-edge sebab model kita dibina berdasarkan data yang sangat sikit. Masalahnya kita tidak mempunyai data level yang mencukupi dalam permainan orignal Super Mario Brothers untuk menyediakan data yang cukup bagi menghasilkan model yang baik.
Jika kita dapat akses kepada beratus ribu level Super Mario Maker yang dibuat oleh pengguna sebagaimana dipunyai oleh Nintendo, kita mungkin boleh menghasilkan model yang terbaik. Tetapi, sayangnya kita tidak, sebab Nintendo tidak akan serahkan kepada kita. Syarikat-syarikat besar tidak akan menyerahkan data mereka secara percuma.
Apabila pembelajaran mesin (ML) menjadi semakin penting kepada industri, beza antara perisian yang baik dan kurang bergantung kepada jumlah data yang digunakan untuk melatih model. Hal ini, menyebabkan syarikat seperti Google dan Facebook sangat berkehendakkan data bersungguh-sungguh.
Sebagai contoh, Google melepaskan TensorFlow sebagai sumber terbuka. TensorFlow ialah sebuah toolkit perisian untuk membina aplikasi pembelajaran mesin berskala besar. Hal ini merupakan perkara yang sangat besar kepada Google yang mampu memberi produk sehebat ini secara percuma. TensorFlow juga adalah toolkit yang digunakan oleh Google Translate.
Tapi, tanpa data yang berlambak dalam semua bahasa sebagaimana yang dipunyai oleh Google. anda tidak akan mampu menghasilkan pesaing kepada Google Translate. Jumlah data lah yang menjadikan Google berada pada tahap sekarang. Di lain masa, apabila anda buka sejarah lokasi Google Maps atau sejarah lokasi Facebook, cuba fikir. Anda akan perasan bahawa tempat-tempat yang pernah anda pergi disimpan.
Dalam pembelajaran mesin, tidak pernah ada satu cara sahaja untuk selesaikan masalah. Anda ada infiniti pilihan cara untuk melakukan pra-proses data anda dan algoritma yang ingin digunakan. Selalunya, menggabungkan beberapa pendekatan akan memberikan anda hasil yang lebih memuaskan berbanding dengan satu pendekatan.
Pembaca lain juga menghantarkan pautan-pautan menarik tentang pendekatan untuk menghasilkan level Super Mario:
Justin Michaud meluaskan pendekatan yang saya gunakan untuk menjana level dan menjumpai cara untuk menjana level untuk fail rom NES yang original (kod ditulis 30 tahun lepas)! Anda pun boleh bermain level yang digodamnya di rom yang digodam atas talian.
Kumpulan Amy K. Hoover's menggunakan pendekatan setiap jenis objek dalam level (paip, tanah, platfom, etc) sebagai satu suara dalam kesuluruhan simfoni. Menggunakan proses yang dipanggil functional scaffolding, sistem akan menambah level dengan blok untuk segala jenis objek yang diberikan. Contohnya, anda boleh melukis bentuk asas untuk setiap level, boleh menambah paip dan blok soalan untuk menyiapkan reka bentuk anda.
Kumpulan Steve Dahlskog mempamerkan model untuk data setiap lajur dalam level sebagai satu n-gram siri. Hal ini membolehkan proses jana level dengan algoritma yang lebih mudah berbanding Rangkaian Neural Berulang,RNNyang besar.
Fakhrullah mencelah:
Alhamdulillah berjaya jugak terjemah artikel bahagian 2 ni. Laman web Pusat Rujukan Persuratan Melayu pun dah berwajah baru, lagi senang nak guna.
Tahniah DBP!!, baru nampak berkerja. Cuma, kalau boleh usahakan juga https. Tapi, tak pasti dengan datanya, ada pertambahan baru atau tidak.
Cuma masalah sekarang ni, semua bacaan dan terjemahan ni sekadar teori. Tak tahu bila nak masuk pratikal. Nak kena cari masa buat, baru lagi faham.
Antara idea yang menarik nak saya cuba ialah, kira harga makanan dalam pinggan dengan ambil gambar sahaja dan sebab sayangkan bahasa, algoritma untu tukar ucapan bahasa Melayu ke teks,speech to text.
Macam biasa, kalau ada nampak salah, nak tambah, tinggalkan komen dibawah atau cuit saya dekat twitter @fajarhac. |
AddOn auf Fire TV 4K - Entpacker für gz fehlt
Hallo zusammen,
ich habe ein nerviges Problem auf meinem frisch-aufgesetzten Amazon-Fire TV 4k-Stick mit Kodi 18 nightly,
Wenn ich auf âDatenbank aktualisierenâ klicke, sagt mir Kodi , dass meine DB in ein paar Momenten aktualisiert wird. Mediathekview zieht daraufhin die âFilmliste-akt.gzâ, jedoch passiert nach Abschluss des Download nichts. Die DB wird weder initial erstellt, noch mit âDaten gefüttertâ. Das DB-Datum verbleibt auf 1970; Sie ist weiterhin leer, sodass nichts angezeigt wird.
Ein Blick ins Log sagt mir, dass Mediathekview anscheinend die .gz-Datei nicht entpacken kann:
22:17:27.714 T:18446744071768066336 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Trying to decompress gz fileâ¦
22:17:43.174 T:18446744071768066336 ERROR: [plugin.video.mediathekview-0.5.0:Updater]: gz decompression failed: [Errno 22] Invalid argument
22:17:43.175 T:18446744071768066336 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Return -1
22:17:43.175 T:18446744071768066336 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Cleaning up downloadsâ¦
Ist das Problem bekannt? Wenn ja, wie kann ich es lösen? Sämtliche Entpacker sind aus dem Kodi- Repository installiertâ¦
Das Debug-Log findet ihr hier:
Mit Dank und GruÃ,
Krawei
@media_fread :
Laut Kodi-Entwickler KEIN Kodi, sondern ein AddOn-Problemâ¦wer hat nun recht? Oo
Siehe: https://github.com/xbmc/xbmc/issues/15039
Im github Ticket ist ein Hinweis das es wieder geht
https://github.com/mediathekview/plugin.video.mediathekview/issues/103
@media_fread : Es funktioniert weiterhin nicht (siehe meine Antwort in deinem o.g. Link). Könnte bitte einer der Entwickler meinen Fall prüfen?
Aktuelles debug log hier: https://paste.kodi.tv/vuwogelajo .
Die relevanten Zeilen sind (wie bereits bekannt):
14:52:23.253 T:18446744071794546976 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Trying to download Filmliste-akt.gz from https://liste.mediathekview.de/Filmliste-akt.gzâ¦
14:52:56.008 T:18446744071794546976 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Trying to decompress gz fileâ¦
14:53:07.195 T:18446744071794546976 ERROR: [plugin.video.mediathekview-0.5.0:Updater]: gz decompression failed: [Errno 22] Invalid argument
14:53:07.195 T:18446744071794546976 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Return -1
14:53:07.196 T:18446744071794546976 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Cleaning up downloadsâ¦
Amazon Fire TV 4k and Kodi 18 RC3: No gz/gzip installed/available
OK - das ist jetzt wirklich ein Ding: dass überhaupt GZ verwendet wird, deutet darauf hin, dass die Kodi-Version die auf dem Stick installiert wird, nicht über die Standard-Python-Library
bzverfügt, ansonsten hätte er einebz2Datei vorgezogen, da viel kleiner.gzipist sozusagen der letzte Fallback, aber sehr ungern von uns gesehen, da die Dateien in Vergleich recht groà sind.
Die
gzLibrary ist offensichtlich verfügbar, ansonsten hätte er mangels Entpacker gar nicht erst ein Update versucht. Was allerdings dieser ominöse Error 22 sein soll, ist mir zur Zeit nicht klar. Ich versuche jetzt erst mal die Sache mit gz auf andere Maschinen nachzustellen, da ich über kein Fire TV Stick verfügeâ¦
Der Code ist übrigens sehr unauffällig:
def _decompress_gz(self, sourcefile, destfile):
blocksize = 8192
try:
with open(destfile, 'wb') as dstfile, gzip.open(sourcefile) as srcfile:
for data in iter(lambda: srcfile.read(blocksize), b''):
dstfile.write(data)
# pylint: disable=broad-except
except Exception as err:
self.logger.error('gz decompression failed: {}'.format(err))
return -1
return 0
Was hingegen sehr auffällig ist, ist das Timing:
14:52:56.008 T:18446744071794546976 NOTICE: [plugin.video.mediathekview-0.5.0:Updater]: Trying to decompress gz file…
14:53:07.195 T:18446744071794546976 ERROR: [plugin.video.mediathekview-0.5.0:Updater]: gz decompression failed: [Errno 22] Invalid argument
Die Fehlermeldung kommt erst 12 Sekunden
nach demer angefangen hat auszupacken. Es ist also unter Umständen davon auszugehen dass der Fehler entweder beim Schreiben (Platte voll? Quota? immerhin ist eine ausgepackte Filmliste-akt 310 MB groÃ) auftritt oder die GZ Library auf dem FireTV nicht kompatibel mit dem GZ vom Server ist (was ich für vollkommen ausgeschlossen halte).
Was aber aus dem Code nicht sauber zu sehen ist, ist ob die Exception vom Lesen oder vom Schreiben kommt. Ich befürchte, dass ich hier mal eine Spezialversion für @media_fread bauen muss, um hier etwas bessere Debug Infos zu bekommen.
Was aber aus dem Code nicht sauber zu sehen ist, ist ob die Exception vom Lesen oder vom Schreiben kommt. Ich befürchte, dass ich hier mal eine Spezialversion für @media_fread bauen muss, um hier etwas bessere Debug Infos zu bekommen.
Bin nicht sicher wie ich mit der Debug Version helfen kann. Ich habe das Problem ja nicht mehr (denke es hat sich mit einem Kodi update erledigt.
OK - sorry! Hatte was durcheinander gebracht. @Krawei : Hast Du das Problem noch? Falls nein, können wir den Bug schlieÃen, ansonsten würde ich vorschlagen dass ich Dir mal eine Spezialversion mit zusätzlichen Debug-Ausgaben schicke.
@Krawei Heute oder morgen kommt ein Update aus dem Beta-Repository. In diesem Update habe ich die GZ-Routine mal komplett zerlegt und mit ganz viel Debugs ausgestattet. Wir kriegen dann mal raus, wo es knallt.
Hallo zusammen,
die Version 0.6.0 aus dem Beta-Repository enthält beim Entpacken von GZ Dateien nun zusätzliche Debug-Meldungen. Sollte dieses Problem noch aktuell sein, so bitte ich um Zusendung der Log-Dateien.
Vielen Dank!
Es wird gerade mit Hochdruck daran gearbeitet. Wer betroffen ist, sollte bitte ins Github Issue reinschauen um die neusten Informationen zu erhalten:
https://github.com/mediathekview/plugin.video.mediathekview/issues/103
Das Problem wurde mit Release 0.6.2 gelöst. |
Inspired by the ability to create scientific tech gadgets, I have two new toys on my desk these days: a Raspberry Pi and an Arduino. The simplicity of the Arduino is quite nice, but the ability to code the Raspberry Pi in Python (not to mention built-in Wi-Fi capability) drew me to do some experimenting with the infrared sensor. The problem I ran into almost immediately is that the out-of-the box solution to infrared remotes is lirc, which requires compilation and dedicating a pin specifically for the IR sensor for all time. I thought there must be a pure Python solution on the internets somewhere, but it appears nobody has tackled this one until now.
The wiring setup
The wiring setup for this project used the Sunfounder Raspberry Pi Sensor Kit, which has a few resistors built in. Based on some other posts involving an IR sensor and an Arduino, I think most IR sensors will function in a similar way. The IR Sensor has 3 pins: +3.3V, Signal, and Ground. The +3.3V pin gets attached to the +3.3V pin on the GPIO header, the Ground gets attached to one of the ground pins, and the Signal pin gets attached to one of the GPIO pins (in this example I use GPIO 18, or pin 11 on the header). The picture here is a bit fuzzy but my setup looked like this:
How IR Remotes Work
If you’re not going to use the out-of-the-box solution (lirc), you’re going to need to delve into the world of raw IR data transfer. Between an Arduino-related post and this YouTube Video, it appears that IR remotes are basically using Morse code to transfer information between the remote and the receiver, with 0 indicated by a short pulse, and 1 indicated by a long pulse. How long are these pulses? It appears they are between 0.5 ms and 2 ms based on the adafruit article, but is Python fast enough to measure this? It’s time to find out:
import RPi.GPIO as GPIO
from time import time
# Numbers GPIOs by physical location
GPIO.setmode(GPIO.BOARD)
# set pin 11 as an input pin with default as LOW v
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
# define a function to acquire data
def binary_acquire(pin, duration):
# acquires data as quickly as possible
t0 = time() # time is in seconds here
results = []
while (time() - t0) < duration:
results.append(GPIO.input(pin))
return results
print("Acquiring data for 1 second")
# acquire data for 1 second
results = binary_acquire(11, 1.0)
print("Done!")
print(",".join([str(result) for result in results])
GPIO.cleanup()
This bit of code will record as many possible values as it can from the signal output of the IR sensor, and print the results. These will be totally illegible, but if you put them into R, they look something like this (assuming you pressed a button on your remote during the 1 second interval you ran the script).
Lo and behold, we get a whole lot of long and short bursts, just as the YouTube Video predicted. My Pi is able to acquire (and store to memory) around 160,000 values in a second, which isn’t excellent but seems to do the trick for IR remotes anyway. If we zoom in to the section where the shorts and longs are, we can see the short/long difference a little more clearly.
So how long are these short/long pulses? My X-axis here is marked in samples, but since this may vary from Pi to Pi, it’s probably better to convert these into times like in the adafruit article. Instead of measuring the time of every sample, I’m going to use the overall sample rate (len(results)/duration) to convert run-lengths into durations.
rate = len(data) / 1.0 # because we acquired data for 1 second
pulses = []
i_break = 0
# detect run lengths using the acquisition rate to turn the times in to microseconds
for i in range(1, len(data)):
if (data[i] != data[i-1]) or (i == len(data)-1):
pulses.append((data[i-1], int((i-i_break)/rate*1e6)))
i_break = i
If we add this to our existing code and examine the results, it looks a little messy but we get a whole lot of long and short “1” values and pretty consistent “0” values. Based on the adafruit article and some data I played around with, short pulses are somewhere around 0.5 ms, and long pulses are somewhere around 1.2 ms. The variation in short/long pulses is probably because some combination of linux and Python are busy doing other things like running the OS or garbage collection and didn’t manage to read a few of the values that we would have liked. Either way, the short pulses are always less than 1 ms, and the long pulses are always greater than 1 ms, which we can use to translate our pulse durations into binary code.
# decode ( < 1 ms "1" pulse is a 1, > 1 ms "1" pulse is a 1, longer than 2 ms pulse is something else)
# does not decode channel, which may be a piece of the information after the long 1 pulse in the middle
outbin = ""
for val, us in pulses:
if val != 1:
continue
if outbin and us > 2000:
break
elif us < 1000:
outbin += "0"
elif 1000 < us < 2000:
outbin += "1"
print(outbin)
In this example I disregard any pulse greater than 2 ms (2000 microseconds) until some numbers have been read, and use the longer than 2 ms pulse after all the numbers to terminate reading the long/short values. This ensures all the long/short pulses are next to eachother. Put into practice, we can use a couple more RPi.GPIO tricks to listen for a change in the signal (wait_for_edge()) before recording our values.
import RPi.GPIO as GPIO
from time import time
def setup():
GPIO.setmode(GPIO.BOARD) # Numbers GPIOs by physical location
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
def binary_aquire(pin, duration):
# aquires data as quickly as possible
t0 = time()
results = []
while (time() - t0) < duration:
results.append(GPIO.input(pin))
return results
def on_ir_receive(pinNo, bouncetime=150):
# when edge detect is called (which requires less CPU than constant
# data acquisition), we acquire data as quickly as possible
data = binary_aquire(pinNo, bouncetime/1000.0)
if len(data) < bouncetime:
return
rate = len(data) / (bouncetime / 1000.0)
pulses = []
i_break = 0
# detect run lengths using the acquisition rate to turn the times in to microseconds
for i in range(1, len(data)):
if (data[i] != data[i-1]) or (i == len(data)-1):
pulses.append((data[i-1], int((i-i_break)/rate*1e6)))
i_break = i
# decode ( < 1 ms "1" pulse is a 1, > 1 ms "1" pulse is a 1, longer than 2 ms pulse is something else)
# does not decode channel, which may be a piece of the information after the long 1 pulse in the middle
outbin = ""
for val, us in pulses:
if val != 1:
continue
if outbin and us > 2000:
break
elif us < 1000:
outbin += "0"
elif 1000 < us < 2000:
outbin += "1"
try:
return int(outbin, 2)
except ValueError:
# probably an empty code
return None
def destroy():
GPIO.cleanup()
if __name__ == "__main__":
setup()
try:
print("Starting IR Listener")
while True:
print("Waiting for signal")
GPIO.wait_for_edge(11, GPIO.FALLING)
code = on_ir_receive(11)
if code:
print(str(hex(code)))
else:
print("Invalid code")
except KeyboardInterrupt:
pass
except RuntimeError:
# this gets thrown when control C gets pressed
# because wait_for_edge doesn't properly pass this on
pass
print("Quitting")
destroy()
Note that here I use the terminology bouncetime, which I mean to be the amount of time for which we should record. Based on some experimenting, it looks like it’s usually around 150 ms. And there you go! You should get something like:
Starting IR Listener
Waiting for signal
0xffa25d
Waiting for signal
0xff629d
Waiting for signal
0xffe21d
^CQuitting
From some experimenting with the remote from the Sunfounder Kit, I built a dictionary of codes for the remote.
CODES = {
0xffa25d: "ON/OFF",
0xff629d: "MODE",
0xffe21d: "MUTE",
0xff22dd: "PLAY/PAUSE",
0xff02fd: "PREVIOUS",
0xffc23d: "NEXT",
0xffe01f: "EQ",
0xffa857: "MINUS",
0xff906f: "PLUS",
0xff6897: "0",
0xff9867: "SHUFFLE",
0xffb04f: "U/SD",
0xff30cf: "1",
0xff18e7: "2",
0xff7a85: "3",
0xff10ef: "4",
0xff38c7: "5",
0xff5aa5: "6",
0xff42bd: "7",
0xff4ab5: "8",
0xff52ad: "9",
}
Probably the most useful implementation of this would be to put it in a Thread of some type and listen in the background since the wait_for_edge() function blocks until it something changes on the pin that it monitors. Now to build a remote-control coffee machine for those early mornings… |
Good and Bad Practices of Coding in Python
Table of contents:
Python is a high-level multi-paradigm programming language that emphasizes readability. It’s being developed, maintained, and often used following the rules called The Zen of Python or PEP 20.
This article shows several examples of good and bad practices of coding in Python that you’re likely to meet often.
Using Unpacking to Write Concise Code
Packing and unpacking are powerful Python features. You can use unpacking to assign values to your variables:
>>> a, b = 2, 'my-string' >>> a 2 >>> b 'my-string'
You can exploit this behavior to implement probably the most concise and elegant variables swap in the entire world of computer programming:
>>> a, b = b, a >>> a 'my-string' >>> b 2
That’s awesome! Unpacking can be used for the assignment to multiple variables in more complex cases. For example, you can assign like this:
>>> x = (1, 2, 4, 8, 16) >>> a = x[0] >>> b = x[1] >>> c = x[2] >>> d = x[3] >>> e = x[4] >>> a, b, c, d, e (1, 2, 4, 8, 16)
But instead, you can use more concise and arguably more readable approach:
>>> a, b, c, d, e = x >>> a, b, c, d, e (1, 2, 4, 8, 16)
That’s cool, right? But it can be even cooler:
>>> a, *y, e = x >>> a, e, y (1, 16, [2, 4, 8])
The point is that the variable with * collects the values not assigned to others.
Using Chaining to Write Concise Code
Python allows you to chain the comparison operations. So, you don’t have to use and to check if two or more comparisons are True:
>>> x = 4 >>> x >= 2 and x <= 8 True
Instead, you can write this in a more compact form, like mathematicians do:
>>> 2 <= x <= 8 True >>> 2 <= x <= 3 False
Python also supports chained assignments. So, if you want to assign the same value to multiple variables, you can do it in a straightforward way:
>>> x = 2 >>> y = 2 >>> z = 2
A more elegant way is to use unpacking:
>>> x, y, z = 2, 2, 2
However, things become even better with chained assignments:
>>> x = y = z = 2 >>> x, y, z (2, 2, 2)
Be careful when your value is mutable! All the variables refer to the same instance.
Checking against None
None is a special and unique object in Python. It has a similar purpose, like null in C-like languages.
It’s possible to check whether a variable refers to it with the comparison operators == and !=:
>>> x, y = 2, None >>> x == None False >>> y == None True >>> x != None True >>> y != None False
However, a more Pythonic and desirable way is using is and is not:
>>> x is None False >>> y is None True >>> x is not None True >>> y is not None False
In addition, you should prefer using the is not construct x is not None over its less readable alternative not (x is None).
Iterating over Sequences and Mappings
You can implement iterations and for loops in Python in several ways. Python offers some built-in classes to facilitate it.
In almost all cases, you can use the range to get an iterator that yields integers:
>>> x = [1, 2, 4, 8, 16] >>> for i in range(len(x)): ... print(x[i]) ... 1 2 4 8 16
However, there’s a better way to iterate over a sequence:
>>> for item in x: ... print(item) ... 1 2 4 8 16
But what if you want to iterate in the reversed order? Of course, the range is an option again:
>>> for i in range(len(x)-1, -1, -1): ... print(x[i]) ... 16 8 4 2 1
Reversing the sequence is a more elegant way:
>>> for item in x[::-1]: ... print(item) ... 16 8 4 2 1
The Pythonic way is to use reversed to get an iterator that yields the items of a sequence in the reversed order:
>>> for item in reversed(x): ... print(item) ... 16 8 4 2 1
Sometimes you need both the items from a sequence and the corresponding indices:
>>> for i in range(len(x)): ... print(i, x[i]) ... 0 1 1 2 2 4 3 8 4 16
It’s better to use enumerate to get another iterator that yields the tuples with the indices and items:
>>> for i, item in enumerate(x): ... print(i, item) ... 0 1 1 2 2 4 3 8 4 16
That’s cool. But what if you want to iterate over two or more sequences? Of course, you can use the range again:
>>> y = 'abcde' >>> for i in range(len(x)): ... print(x[i], y[i]) ... 1 a 2 b 4 c 8 d 16 e
In this case, Python also offers a better solution. You can apply zip and get tuples of the corresponding items:
>>> for item in zip(x, y): ... print(item) ... (1, 'a') (2, 'b') (4, 'c') (8, 'd') (16, 'e')
You can combine it with unpacking:
>>> for x_item, y_item in zip(x, y): ... print(x_item, y_item) ... 1 a 2 b 4 c 8 d 16 e
Please, have in mind that range can be very useful. However, there are cases (like those shown above) where there are more convenient alternatives. Iterating over a dictionary yields its keys:
>>> z = {'a': 0, 'b': 1} >>> for k in z: ... print(k, z[k]) ... a 0 b 1
However, you can apply the method .items() and get the tuples with the keys and the corresponding values:
>>> for k, v in z.items(): ... print(k, v) ... a 0 b 1
You can also use the methods .keys() and .values() to iterate over the keys and values, respectively.
Comparing to Zero
When you have numeric data, and you need to check if the numbers are equal to zero, you can but don’t have to use the comparison operators == and !=:
>>> x = (1, 2, 0, 3, 0, 4) >>> for item in x: ... if item != 0: ... print(item) ... 1 2 3 4
The Pythonic way is to exploit the fact that zero is interpreted as False in a Boolean context, while all other numbers are considered as True:
>>> bool(0) False >>> bool(-1), bool(1), bool(20), bool(28.4) (True, True, True, True)
Having this in mind you can just use if item instead of if item != 0:
>>> for item in x: ... if item: ... print(item) ... 1 2 3 4
You can follow the same logic and use if not item instead of if item == 0.
Avoiding Mutable Optional Arguments
Python has a very flexible system of providing arguments to functions and methods. Optional arguments are a part of this offer. But be careful: you usually don’t want to use mutable optional arguments. Consider the following example:
>>> def f(value, seq=[]): ... seq.append(value) ... return seq
At first sight, it looks like that, if you don’t provide seq, f() appends a value to an empty list and returns something like [value]:
>>> f(value=2) [2]
Looks fine, right? No! Consider the following examples:
>>> f(value=4) [2, 4] >>> f(value=8) [2, 4, 8] >>> f(value=16) [2, 4, 8, 16]
Surprised? Confused? If you are, you’re not the only one. It seems that the same instance of an optional argument (list in this case) is provided every time the function is called. Maybe sometimes you’ll want just what the code above does. However, it’s much more likely that you’ll need to avoid that. You can keep away from that with some additional logic. One of the ways is this:
>>> def f(value, seq=None): ... if seq is None: ... seq = [] ... seq.append(value) ... return seq
A shorter version is:
>>> def f(value, seq=None): ... if not seq: ... seq = [] ... seq.append(value) ... return seq
Now, you get different behavior:
>>> f(value=2) [2] >>> f(value=4) [4] >>> f(value=8) [8] >>> f(value=16) [16]
In most cases, that’s what one wants.
Avoiding Classical Getters and Setters
Python allows defining getter and setter methods similarly as C++ and Java:
>>> class C: ... def get_x(self): ... return self.__x ... def set_x(self, value): ... self.__x = value
This is how you can use them to get and set the state of an object:
>>> c = C() >>> c.set_x(2) >>> c.get_x() 2
In some cases, this is the best way to get the job done. However, it’s often more elegant to define and use properties, especially in simple cases:
>>> class C: ... @property ... def x(self): ... return self.__x ... @x.setter ... def x(self, value): ... self.__x = value
Properties are considered more Pythonic than classical getters and setters. You can use them similarly as in C#, i.e. the same way as ordinary data attributes:
>>> c = C() >>> c.x = 2 >>> c.x 2
So, in general, it’s a good practice to use properties when you can and C++-like getters and setters when you have to.
Avoiding Accessing Protected Class Members
Python doesn’t have real private class members. However, there’s a convention that says that you shouldn’t access or modify the members beginning with the underscore (_) outside their instances. They are not guaranteed to preserve the existing behavior.
For example, consider the code:
>>> class C: ... def __init__(self, *args): ... self.x, self._y, self.__z = args ... >>> c = C(1, 2, 4)
The instances of class C have three data members: .x, ._y, and ._Cz. If a member’s name begins with a double underscore (dunder), it becomes mangled, that is modified. That’s why you have ._Cz instead of .__z.Now, it’s quite OK to access or modify .x directly:
>>> c.x # OK1
You can also access or modify ._y from outside its instance, but it’s considered a bad practice:
>>> c._y # Possible, but a bad practice!2
You can’t access .z because it’s mangled, but you can access or modify ._Cz:
>>> c.__z # Error! Traceback (most recent call last): File "", line 1, in AttributeError: 'C' object has no attribute '__z' >>> c._C__z # Possible, but even worse! 4 >>>
You should avoid doing this. The author of the class probably begins the names with the underscore(s) to tell you, “don’t use it”.
Using Context Managers to Release Resources
Sometimes it’s required to write the code to manage resources properly. It’s often the case when working with files, database connections, or other entities with unmanaged resources. For example, you can open a file and process it:
>>> my_file = open('filename.csv', 'w') >>> # do something with `my_file`
To properly manage the memory, you need to close this file after finishing the job:
>>> my_file = open('filename.csv', 'w') >>> # do something with `my_file and` >>> my_file.close()
Doing it this way is better than not doing it at all. But, what if an exception occurs while processing your file? Then my_file.close() is never executed. You can handle this with exception-handling syntax or with context managers. The second way means that you put your code inside the with a block:
>>> with open('filename.csv', 'w') as my_file: ... # do something with `my_file`
Using the with block means that the special methods .enter() and .exit() are called, even in the cases of exceptions. These methods should take care of the resources.You can achieve especially robust constructs by combining the context managers and exception handling.
Stylistic Advises
Python code should be elegant, concise, and readable. It should be beautiful.
The ultimate resource on how to write beautiful Python code is Style Guide for Python Code or PEP 8. You should definitely read it if you want to code in Python.
Conclusions
This article gives several advises on how to write a more efficient, more readable, and more concise code. In short, it shows how to write a Pythonic code. In addition, PEP 8 provides the style guide for Python code, and PEP 20 represents the principles of Python language.
Enjoy writing Pythonic, useful, and beautiful code!
Thank you for reading.
The article was prepared by our teammate Mirko. |
譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝�
繧�繧阪≧縺ィ縺励※縺�繧倶コ�
險育ョ励�ョ莉墓婿縺ッ莠後▽
譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝医�ョ蜈ャ蠑�
謨ー蛟、險育ョ励〒譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝医r豎ゅa繧九◆繧√�ョ縲∬ィ育ョ玲焔鬆�
譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝医∪縺ァ縺ョ驕薙�ョ繧�
1. 蠎ァ讓吝�、繧呈焔縺ォ蜈・繧後k 蝗幄ァ貞ス「縺ョx,y蠎ァ讓吶r蠕励k髢「謨ー
def coorsquare(r=4, center=[0,0], n=10, closed=True):
rx1=-r/2.0+center[0]
rx2=r/2.0+center[0]
ry1=-r/2.0+center[1]
ry2=r/2.0+center[1]
x = [rx1 + i*(rx2-rx1)/float(n) for i in range(n)]
y = [ry2 for i in range(n)]
#1
x = x + [rx2 for i in range(n)]
y = y + [ry2 - i*(ry2-ry1)/float(n) for i in range(n)]
#2
x = x + [rx2 - i*(rx2-rx1)/float(n) for i in range(n)]
y = y + [ry1 for i in range(n)]
#3
x = x + [rx1 for i in range(n)]
y = y + [ry1 + i*(ry2-ry1)/float(n) for i in range(n)]
#4
if closed:
x = x + [rx1]
y = y + [ry2]
return x, y
x,y = coorsquare(r=5, center=[0,0], n=8, closed=True)
荳翫�ョ髢「謨ー繧剃スソ縺」縺ヲ菴懊▲縺毋,y繧剃ス懷峙縲�
import matplotlib.pyplot as plt
fig,ax1 = plt.subplots(1)
ax1.plot(x,y)
ax1.scatter(x,y)
for i in range(len(x)):
ax1.text(x[i],y[i],"%s"%(i))
ax1.set_xlim(-4,4)
ax1.set_ylim(-4,4)
ax1.set_aspect("equal")
plt.show()
2. 髱「遨阪r險育ョ冷�呈妙髱「荳�谺。繝「繝シ繝。繝ウ繝遺�貞峙蠢�繧定ィ育ョ励��
def area(x, y):
A = sum([(y[i]+y[i+1])/2.0*(x[i+1]-x[i]) for i in range(len(x)-1)])
return A
x,y = coorsquare(r=5, center=[0,0], n=20, closed=True)
print(area(x,y))
def FirstInrt(x, y):
FIy = sum([(y[i]+y[i+1])**2/8.0*(x[i+1]-x[i]) for i in range(len(x)-1)])
FIx = sum([(x[i]+x[i+1])**2/8.0*(y[i]-y[i+1]) for i in range(len(x)-1)])
return FIx, FIy
print(FirstInrt(x, y))
def CntCS(x, y):
A = area(x, y)
FIx, FIy = FirstInrt(x, y)
Cntx, Cnty = FIx/A, FIy/A
return Cntx, Cnty
print("center(%s,%s)"%(CntCS(x, y)))
3. 譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝医�ョ險育ョ暦シ医す繝ウ繝励た繝ウ蜑��シ�
def simpson(x,y):#sekibun_1/3simpson(3point)
Cntx, Cnty = CntCS(x, y)
x = np.array(x)-Cntx
y = np.array(y)-Cnty
SecondInrty,SecondInrtx = 0,0
x1 = np.array(x)**3/3.0
y1 = np.array(y)**3/3.0
for i in range(len(x))[::2][:-1]:
a1 = float(x[i+1]-x[i])
# if は、0割対策。
if a1 !=0.0:
a1 = y1[i] * (x[i+2]-x[i])*(-x[i+2]-2.0*x[i]+3.0*x[i+1])/a1/6.0
a2 = float((x[i+1]-x[i])*(x[i+2]-x[i+1]))
if a2 !=0.0:
a2 = y1[i+1] * (x[i+2]-x[i])**3/a2/6.0
a3 = float(x[i+2]-x[i+1])
if a3 !=0.0:
a3 = y1[i+2] * (x[i+2]-x[i])*(2.0*x[i+2]+x[i]-3.0*x[i+1])/a3/6.0
SecondInrty = SecondInrty + a1 + a2 + a3
b1 = y[i+1]-y[i]
if b1 !=0:
b1 = x1[i]*(y[i+2]-y[i])*(-y[i+2]-2*y[i]+3*y[i+1])/b1/6.0
b2 = (y[i+1]-y[i])*(y[i+2]-y[i+1])
if b2 !=0:
b2 = x1[i+1]*(y[i+2]-y[i])**3/6.0/b2
b3 = y[i+2]-y[i+1]
if b3 !=0:
b3 = x1[i+2]*(y[i+2]-y[i])*(2*y[i+2]+y[i]-3*y[i+1])/b3/6.0
SecondInrtx = SecondInrtx - (b1 + b2 + b3)
return SecondInrtx, SecondInrty
# 断面が中心にない場合でも、図心を原点に持って行った時の値になるようにした。
x,y = coorsquare(r=5, center=[7,7], n=8, closed=True)
print("1/3simpson(3point)%s,%s"(simpson(x,y)))
# nは、偶数にしないと、正しく計算されない。
#うまくいかない。
x,y = coorsquare(r=5, center=[0,0], n=7, closed=True)
print("1/3simpson(3point)%s,%s"%(simpson(x,y)))
# 全体で偶数になっていても、一辺が奇数ではだめ。
#うまくいかない
x,y = coorsquare(r=5, center=[0,0], n=8, closed=True)
x.insert(-9,(-2.5-1.875)/2.0)
y.insert(-9,-2.5)
y.insert(-1,(2.5+1.875)/2.0)
x.insert(-1,-2.5)
print("1/3simpson(3point)%s,%s"%(simpson(x,y)))
# nが偶数なら、不等間隔でも計算される。
x,y = coorsquare(r=5, center=[0,0], n=8, closed=True)
x.insert(-9,(-2.5-1.875)/2.0)
y.insert(-9,-2.5)
x.insert(-9,-2.4)
y.insert(-9,-2.5)
print("1/3simpson(3point)%s,%s"%(simpson(x,y)))
縺イ縺怜ス「(45蠎ヲ蝗櫁サ「縲ゆク�驛ィ荳咲ュ蛾俣髫斐��)
x,y = coorsquare(r=5, center=[0,0], n=8, closed=True)
x.insert(-9,(-2.5-1.875)/2.0)
y.insert(-9,-2.5)x.insert(-9,-2.4)
y.insert(-9,-2.5)
xy= np.dot(np.array([x,y]).T, [[np.cos(1/4.0*np.pi), np.sin(1/4.0*np.pi)],[-np.sin(1/4.0*np.pi), np.cos(1/4.0*np.pi)]]).T
x=xy[0]
y=xy[1]
print("1/3simpson(3point)%s,%s"%(simpson(x,y)))
遨エ縺ゅ″�シ域ュ」譁ケ蠖「�シ�
x1,y1 = coorsquare(r=5, center=[0,0], n=8, closed=True)
x2,y2 = coorsquare(r=3, center=[0,0], n=4, closed=True)
x2=x2[::-1]
y2=y2[::-1]
Ix1,Iy1 = simpson(x1,y1)
Ix2,Iy2 = simpson(x2,y2)
a = Ix1+Ix2
b = Iy1+Iy2
print("1/3simpson(3point)%s,%s"%(a,b))
import matplotlib.pyplot as plt
fig,ax1 = plt.subplots(1)
ax1.plot(x1,y1)
ax1.plot(x2,y2)
ax1.scatter(x1,y1)
ax1.scatter(x2,y2)
for i in range(len(x1)):
ax1.text(x1[i],y1[i],"%s"%(i))
if i<len(x2):
ax1.text(x2[i],y2[i],"%s"%(i))
ax1.set_xlim(-3,3)
ax1.set_ylim(-3,3)
ax1.set_aspect("equal")
plt.show()
蜀�蠖「
N=21
x = np.cos(np.linspace(0, 2.0*np.pi, N))
y = np.sin(np.linspace(0, 2.0*np.pi, N))
print("1/3simpson(3point)%s,%s"%(simpson(x,y)))
蜀�縺ョ逅�隲門�、
va = 0.25 * np.pi
Ix = []
Iy = []
Acx = []
Acy = []
Erx = []
Ery = []
for N in np.array(np.logspace(1,4,10),"int"):
#for N in np.array(np.logspace(1,4,10),"int")*4+1:
x = np.cos(np.linspace(0, 2.0*np.pi, N))
y = -np.sin(np.linspace(0, 2.0*np.pi, N))#Clockwise
I = simpson(x,y)
Ix.append(I[0])
Iy.append(I[1])
Acx.append(Ix[-1]/va)
Acy.append(Iy[-1]/va)
Erx.append(1 - Acx[-1])
Ery.append(1 - Acy[-1])
print("%s-points,¥tIx:%s,¥tIy:%s,¥tAcx:%s,¥tAcy:%s,¥tErx:%s,¥tEry:%s,¥t"%(N,Ix[-1],Iy[-1],Acx[-1],Acy[-1],Erx[-1],Ery[-1]))
import matplotlib.pyplot as plt
x = np.array(np.logspace(1,4,10),"int")#*4+1
fig,(ax1,ax2) = plt.subplots(2,1)
ax1.plot(x,[va for i in range(len(x))])
ax1.scatter(x, Acy)
ax2.plot(x,[1 for i in range(len(x))])
ax2.scatter(x, Ery)
ax1.set_ylim(0.99,1)
ax2.set_ylim(0.00000000000001, 0.1)
ax1.set_xscale("log")
ax2.set_xscale("log")
ax1.set_yscale("log")
ax2.set_yscale("log")
plt.show()
縲壬ython縺ョ譛�譁ー險倅コ九��
荳ュ騾溘ヵ繝シ繝ェ繧ィ螟画鋤 窶セ髮「謨」繝輔�シ繝ェ繧ィ螟画鋤繧医j..
譁ュ髱「莠梧ャ。繝「繝シ繝。繝ウ繝医r縲∝コァ讓咏せ縺ョ驟榊�励°繧芽ィ�..
font繝輔ぃ繧、繝ォ縺ョ譁�蟄励ョ繝シ繧ソ�シ医げ繝ェ繝包シ峨r..
matplotlib縺ョpyplot.pl..
險育ョ怜鴨蟄ヲ謚�陦楢��隧ヲ鬨薙�ョ蝠城。碁寔 閾ェ轤奇シ郁」∵妙竊�..
python縺ァ縲√�帙Ρ繧、繝医ヮ繧、繧コ繧�繝斐Φ繧ッ繝�..
閼ウ繝峨ャ繧ー縺ォ陦後▲縺ヲ縺阪◆縲や�樽RI縺ョ逕サ蜒上ョ繝シ..
matplotlib縺ョimshow縺ァ蜀�繧�..
matplotlib縺ョ縲…map繧偵�∝セ舌��..
matplotlib縺ョmake_axes..
matplotlib floatinga..
matplotlib plot縺ョ濶イ繧偵�∝�、..
Python縺ァ縲√�御コ梧ャ。蜈�繝輔�シ繝ェ繧ィ螟画鋤縺励◆..
matplotlib縺ョlinestyle..
縺ゥ縺。繧峨′豁」縺励>RGB縺九�ゑシ�matplot..
matplotlib縺ョannotate縺ョ..
matplotlib縺ァ縲』霆ク縺ィ�ス呵サク縺ョ謨ー蟄�..
VBA縺ァ縲}ython縺ョrange縺ィ縺九��..
matplotlib縺ョaxes3D縺ァ縲‖..
matplotlib縺ョlatex縺ァ縲∬。悟��.. |
I have a pico, but the version of micropython on it is a bit old. Anyways, here's my result:willie wrote:I'd be interested in seeing measurements for Espruino Pico (I like that form factor) if someone has one. The Teensy 3.2 is also of interest.
Code: Select all
MicroPython v1.5.2-21-g824f83f on 2016-01-04; Espruino Pico with STM32F401CD
Type "help()" for more information.
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
54448
Code: Select all
MicroPython v1.8.4 on 2016-09-09; Espruino Pico with STM32F401CD
Type "help()" for more information.
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
54336
>>>
Code: Select all
import machine
import pyb
Code: Select all
MicroPython v1.8.4 on 2016-09-09; Espruino Pico with STM32F401CD
Type "help()" for more information.
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
54352
>>>
Teensy 3.1:
Micro Python v1.4.5 on 2015-08-16; Teensy-3.1 with MK20DX256
Type "help()" for more information.
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
52064
Update:
MicroPython v1.8.4-11-g0fd3d8d-dirty on 2016-09-15; Teensy-3.1 with MK20DX256
Type "help()" for more information.
>>> import gc
>>> gc.collect();gc.mem_free()
52112
Micro Python v1.4.5 on 2015-08-16; Teensy-3.1 with MK20DX256
Type "help()" for more information.
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
52064
Update:
MicroPython v1.8.4-11-g0fd3d8d-dirty on 2016-09-15; Teensy-3.1 with MK20DX256
Type "help()" for more information.
>>> import gc
>>> gc.collect();gc.mem_free()
52112
In the course of another topic Damien epxlained that WiPy requires the code to be loaded into RAM and executed there. Therefore most of the RAM is reserved for the MicroPython code itself, leaving only 56k for heap.
On a LoPy v1.0 board with stock firmware
Code: Select all
>>> import gc
>>> gc.collect()
>>> gc.mem_free()
191088
>>>
Tiny Core Linux (piCore) developer
HAM radio call: HA5DI (Béla)
HAM radio call: HA5DI (Béla) |
Python 3.7ã®æ°æ©è½
Python 3.7ã¯https://www.python.org/downloads/release/python-370/[å ¬å¼ãªãªã¼ã¹]ã§ãï¼ ãã®æ°ããPythonãã¼ã¸ã§ã³ã¯https://www.python.org/dev/peps/pep-0537/[2016å¹´9æ]ããéçºããã¦ãããã³ã¢éçºè ã®åªåã®çµæã享åã§ããããã«ãªãã¾ããã
æ°ããPythonãã¼ã¸ã§ã³ã¯ä½ãããããã¾ããï¼ https://docs.python.org/3.7/whatsnew/3.7.html [ããã¥ã¡ã³ã]ã«ã¯æ°æ©è½ã®æ¦è¦ãè¨è¼ããã¦ãã¾ããããã®è¨äºã§ã¯ãã¥ã¼ã¹ã®å¤§ããªé¨åã«ã¤ãã¦è©³ãã説æãã¾ãã ããããå«ã¾ãã¾ãï¼
æ°ãã `+ breakpointï¼ï¼+`ãã«ãã¤ã³ã«ãããããã¬ã¼ã¸ã®ããç°¡åãªã¢ã¯ã»ã¹
ãã¼ã¿ã¯ã©ã¹ã使ç¨ããç°¡åãªã¯ã©ã¹ä½æ
ã¢ã¸ã¥ã¼ã«å±æ§ã¸ã®ã«ã¹ã¿ãã¤ãºãããã¢ã¯ã»ã¹
ã¿ã¤ããã³ãã£ã³ã°ã®ãµãã¼ãã®æ¹å
é«ç²¾åº¦ã®ã¿ã¤ãã³ã°é¢æ°
ããã«éè¦ãªãã¨ã«ãPython 3.7ã¯é«éã§ãã
ãã®è¨äºã®æå¾ã®ã»ã¯ã·ã§ã³ã§ã¯ããã®é度ã®ã»ããPython 3.7ã®ãã®ä»ã®åªããæ©è½ã«ã¤ãã¦è©³ãã説æãã¾ãã ã¾ããæ°ãããã¼ã¸ã§ã³ã¸ã®ã¢ããã°ã¬ã¼ãã«é¢ããã¢ããã¤ã¹ãå¾ããã¾ãã
`+ breakpointï¼ï¼+`ãã«ãã¤ã³
ç§ãã¡ã¯å®ç§ãªã³ã¼ããæ¸ãããåªåããããããã¾ããããåç´ãªçå®ã¯ç§ãã¡ã決ãã¦ããªãã¨ãããã¨ã§ãã ãããã°ã¯ããã°ã©ãã³ã°ã®éè¦ãªé¨åã§ãã Python 3.7ã§ã¯ãæ°ããçµã¿è¾¼ã¿é¢æ° `+ breakpointï¼ï¼+`ãå°å ¥ããã¦ãã¾ãã ããã¯å®éã«ã¯Pythonã«æ°ããæ©è½ã追å ãããã®ã§ã¯ããã¾ãããããããã¬ã¼ã®ä½¿ç¨ãããæè»ãã¤ç´æçã«ãã¾ãã
ãã¡ã¤ã« `+ bugs.py +`ã«æ¬¡ã®ãã°ã®ããã³ã¼ããããã¨ä»®å®ãã¾ãã
def divide(e, f):
return f/e
a, b = 0, 1
print(divide(a, b))
ã³ã¼ããå®è¡ããã¨ã `+ divideï¼ï¼`颿°å ã§ ` ZeroDivisionError `ãçºçãã¾ãã ã³ã¼ãã䏿ãã¦ã ` divideï¼ï¼+`ã®ä¸çªä¸ã«ããhttps://realpython.com/python-debugging-pdb/[debugger]ã«ããããããã¨ãã¾ãã ãããè¡ãã«ã¯ãã³ã¼ãã«ããããããã¬ã¼ã¯ãã¤ã³ãããè¨å®ãã¾ãã
def divide(e, f):
# Insert breakpoint here
return f/e
ãã¬ã¼ã¯ãã¤ã³ãã¯ãããã°ã©ã å ã®ç¾å¨ã®ç¶æ ã確èªã§ããããã«ãå®è¡ã䏿çã«åæ¢ããå¿ è¦ãããã³ã¼ãå ã®ã·ã°ãã«ã§ãã ãã¬ã¼ã¯ãã¤ã³ããã©ã®ããã«é ç½®ãã¾ããï¼ Python 3.6以ä¸ã§ã¯ããã®ããä¸å¯è§£ãªè¡ã使ç¨ãã¾ãã
def divide(e, f):
import pdb; pdb.set_trace()
return f/e
ããã§ãhttps://docs.python.org/library/pdb.html [+ pdb +]ã¯æ¨æºã©ã¤ãã©ãªã®Pythonãããã¬ã¼ã§ãã Python 3.7ã§ã¯ã代ããã«æ°ãã `+ breakpointï¼ï¼+`颿°å¼ã³åºããã·ã§ã¼ãã«ããã¨ãã¦ä½¿ç¨ã§ãã¾ãã
def divide(e, f):
breakpoint()
return f/e
ããã¯ã°ã©ã¦ã³ãã§ã `+ breakpointï¼ï¼`ã¯æåã« ` pdb `ãã¤ã³ãã¼ãããæ¬¡ã« ` pdb.set_traceï¼ï¼`ãå¼ã³åºãã¾ãã æãããªå©ç¹ã¯ã ` breakpointï¼ï¼`ã®ã»ããè¦ããããã27æåã§ã¯ãªã12æåãå ¥åããã ãã§æ¸ããã¨ã§ãã ãã ãã ` breakpointï¼ï¼+`ã使ç¨ãããã¨ã®æ¬å½ã®ãã¼ãã¹ã¯ããã®ã«ã¹ã¿ãã¤ãºæ§ã§ãã
`+ breakpointï¼ï¼`ã§ ` bugs.py +`ã¹ã¯ãªãããå®è¡ãã¾ãï¼
$ python3.7 bugs.py
>/home/gahjelle/bugs.py(3)divide()
-> return f/e
(Pdb)
ã¹ã¯ãªããã¯ã `+ breakpointï¼ï¼`ã«å°éããã¨ä¸æããPDBãããã°ã»ãã·ã§ã³ã«ãããããã¾ãã ã c +ãã¨å ¥åãã¦[.keys]ï¼Enterï¼ãæ¼ãã¨ãã¹ã¯ãªãããç¶è¡ã§ãã¾ãã PDBã¨ãããã°ã®è©³ç´°ã«ã¤ãã¦ã¯ãhttps://realpython.com/python-debugging-pdb/[Nathan Jenningsã®PDBã¬ã¤ã]ãåç §ãã¦ãã ããã
次ã«ããã°ãä¿®æ£ããã¨æãã¨è¨ãã¾ãã ãããã¬ã¼ã§åæ¢ããã«ãã¹ã¯ãªãããå度å®è¡ããå¿ è¦ãããã¾ãã ãã¡ããã `+ breakpointï¼ï¼`è¡ãã³ã¡ã³ãã¢ã¦ããããã¨ãã§ãã¾ãããå¥ã®ãªãã·ã§ã³ã¯ ` PYTHONBREAKPOINT `ç°å¢å¤æ°ã使ç¨ãããã¨ã§ãã ãã®å¤æ°ã¯ã ` breakpointï¼ï¼`ã®åä½ãå¶å¾¡ãã ` PYTHONBREAKPOINT = 0 `ãè¨å®ããã¨ã ` breakpointï¼ï¼+`ã®å¼ã³åºãã¯ãã¹ã¦ç¡è¦ããã¾ãã
$ PYTHONBREAKPOINT=0 python3.7 bugs.py
ZeroDivisionError: division by zero
ãã°ãä¿®æ£ãã¦ããªãããã§ãã
å¥ã®ãªãã·ã§ã³ã¯ã `+ PYTHONBREAKPOINT +`ã使ç¨ãã¦PDB以å¤ã®ãããã¬ã¼ãæå®ãããã¨ã§ãã ãã¨ãã°ãhttps://pypi.org/project/pudb/[PuDB]ï¼ã³ã³ã½ã¼ã«ã®ãã¸ã¥ã¢ã«ãããã¬ã¼ï¼ã使ç¨ããã«ã¯ã次ã®ããã«ãã¾ãã
$ PYTHONBREAKPOINT=pudb.set_trace python3.7 bugs.py
ãããæ©è½ããã«ã¯ã + pudb +`ãã¤ã³ã¹ãã¼ã«ããå¿
è¦ãããã¾ãï¼ `+ pip install pudb +ï¼ã ãã ããPythonã `+ pudb `ã®ã¤ã³ãã¼ããå¦çãã¾ãã ãã®æ¹æ³ã§ãããã©ã«ãã®ãããã¬ãè¨å®ãããã¨ãã§ãã¾ãã ` PYTHONBREAKPOINT +`ç°å¢å¤æ°ãã好ã¿ã®ãããã¬ã¼ã«è¨å®ããã ãã§ãã ã·ã¹ãã ã§ç°å¢å¤æ°ãè¨å®ããæ¹æ³ã«ã¤ãã¦ã¯ãhttps://www.schrodinger.com/kb/1842 [ãã®ã¬ã¤ã]ãåç
§ãã¦ãã ããã
æ°ãã `+ breakpointï¼ï¼+`颿°ã¯ããããã¬ã¼ã§ã®ã¿æ©è½ãã¾ããã 便å©ãªãªãã·ã§ã³ã®1ã¤ã¯ãã³ã¼ãå ã§ã¤ã³ã¿ã©ã¯ãã£ãã·ã§ã«ãåç´ã«èµ·åãããã¨ã§ãã ãã¨ãã°ãIPythonã»ãã·ã§ã³ãéå§ããã«ã¯ã次ã使ç¨ã§ãã¾ãã
$ PYTHONBREAKPOINT=IPython.embed python3.7 bugs.py
IPython 6.3.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: print(e/f)
0.0
ã¾ããç¬èªã®é¢æ°ã使ãã `+ breakpointï¼ï¼`ããããå¼ã³åºãããã«ãããã¨ãã§ãã¾ãã æ¬¡ã®ã³ã¼ãã¯ããã¼ã«ã«ã¹ã³ã¼ãå ã®ãã¹ã¦ã®å¤æ°ãåºåãã¾ãã ` bp_utils.py +`ã¨ãããã¡ã¤ã«ã«è¿½å ãã¾ãï¼
from pprint import pprint
import sys
def print_locals():
caller = sys._getframe(1) # Caller is 1 frame up.
pprint(caller.f_locals)
ãã®é¢æ°ã使ç¨ããã«ã¯ãã+ PYTHONBREAKPOINT ãã以åã¨åæ§ã«ãã <ã¢ã¸ã¥ã¼ã«>ã<颿°> +ã表è¨ã§è¨å®ãã¾ãã
$ PYTHONBREAKPOINT=bp_utils.print_locals python3.7 bugs.py
{'e': 0, 'f': 1}
ZeroDivisionError: division by zero
é常ã弿°ãå¿ è¦ã¨ããªã颿°ãã¡ã½ãããå¼ã³åºãã«ã¯ã `+ breakpointï¼ï¼`ã使ç¨ããã¾ãã ãã ãã弿°ã渡ããã¨ãã§ãã¾ãã ` bugs.py `ã®è¡ ` breakpointï¼ï¼+`ãæ¬¡ã®ããã«å¤æ´ãã¾ãã
breakpoint(e, f, end="<-END\n")
*注æï¼*ããã©ã«ãã®PDBãããã¬ã¼ã¯ããã®è¡ã§ `+ TypeError `ãçºçããã¾ããããã¯ã ` pdb.set_traceï¼ï¼+`ãä½ç½®å¼æ°ãåãåããªãããã§ãã
`+ printï¼ï¼`颿°ãè£ ã£ã ` breakpointï¼ï¼+`ã§ãã®ã³ã¼ããå®è¡ããã¨ã渡ããã弿°ã®ç°¡åãªä¾ãè¦ããã¨ãã§ãã¾ãã
$ PYTHONBREAKPOINT=print python3.7 bugs.py
0 1<-END
ZeroDivisionError: division by zero
PEP 553ããã³https://docs.python.org/3.7/library/functions.html#breakpoint [ã®ããã¥ã¡ã³ããåç
§ãã¦ãã ããã詳細ã«ã¤ãã¦ã¯ã+ breakpointï¼ï¼+ `]ããã³https://docs.python.org/3.7/library/sys.html#sys.breakpointhook [ + sys.breakpointhookï¼ï¼+ `]ãåç
§ãã¦ãã ããã
ãã¼ã¿ã¯ã©ã¹
æ°ããhttps://realpython.com/python-data-classes/[+ dataclasses +]ã¢ã¸ã¥ã¼ã«ã¯ã + .ã init ï¼ï¼++ . + . reprã®ãããªç¹å¥ãªã¡ã½ããã¨ãã¦ãç¬èªã®ã¯ã©ã¹ãæ¸ãã®ããã便å©ã«ãã¾ãã ï¼ï¼+ `ããã³ eq ï¼ï¼+ `ãèªåçã«è¿½å ããã¾ãã `+ @ dataclass +`ãã³ã¬ã¼ã¿ã使ç¨ããã¨ã次ã®ããã«è¨è¿°ã§ãã¾ãã
from dataclasses import dataclass, field
@dataclass(order=True)
class Country:
name: str
population: int
area: float = field(repr=False, compare=False)
coastline: float = 0
def beach_per_person(self):
"""Meters of coastline per person"""
return (self.coastline * 1000)/self.population
ãããã®9è¡ã®ã³ã¼ãã¯ãããªãã®å®åã³ã¼ãã¨ãã¹ããã©ã¯ãã£ã¹ã表ãã¦ãã¾ãã + Country +`ãé常ã®ã¯ã©ã¹ã¨ãã¦å®è£
ããã«ã¯ä½ãå¿
è¦ããèãã¦ãã ããï¼ `+ .ã6ã¤ã®ç°ãªãæ¯è¼ã¡ã½ãããããã³ `+ .beach_per_personï¼ï¼`æ¹æ³ã 以ä¸ã®ããã¯ã¹ãå±éãã¦ããã¼ã¿ã¯ã©ã¹ã¨ã»ã¼åçã® ` Country +`ã®å®è£
ã確èªã§ãã¾ãã init ï¼ï¼+`ã¡ã½ããã `+ repr +
使å¾ããã¼ã¿ã¯ã©ã¹ã¯é常ã®ã¯ã©ã¹ã«ãªãã¾ãã ãã¨ãã°ãéå¸¸ã®æ¹æ³ã§ãã¼ã¿ã¯ã©ã¹ããç¶æ¿ã§ãã¾ãã ãã¼ã¿ã¯ã©ã¹ã®ä¸»ãªç®çã¯ãå ç¢ãªã¯ã©ã¹ãç¹ã«ä¸»ã«ãã¼ã¿ãæ ¼ç´ããå°ããªã¯ã©ã¹ããã°ããç°¡åã«è¨è¿°ã§ããããã«ãããã¨ã§ãã
ä»ã®ã¯ã©ã¹ã¨åæ§ã«ã `+ Country +`ãã¼ã¿ã¯ã©ã¹ã使ç¨ã§ãã¾ãã
>>>
>>> norway = Country("Norway", 5320045, 323802, 58133)
>>> norway
Country(name='Norway', population=5320045, coastline=58133)
>>> norway.area
323802
>>> usa = Country("United States", 326625791, 9833517, 19924)
>>> nepal = Country("Nepal", 29384297, 147181)
>>> nepal
Country(name='Nepal', population=29384297, coastline=0)
>>> usa.beach_per_person()
0.06099946957342386
>>> norway.beach_per_person()
10.927163210085629
ã¯ã©ã¹ãåæåããã¨ãã«ããã¹ã¦ã®ãã£ã¼ã«ãã+ .name ããããpopulation ããããarea ããããã³ã .coastline ãã使ç¨ããããã¨ã«æ³¨æãã¦ãã ããï¼ãã ããããcoastline ãã¯ãå é¸ããã¼ã«ã®ä¾ï¼ã ` Country `ã¯ã©ã¹ã«ã¯åççãªhttps://dbader.org/blog/python-repr-vs-str [` repr +`]ãããã¾ãããã¡ã½ããã®å®ç¾©ã¯é常ã®ã¯ã©ã¹ã¨åãããã«æ©è½ãã¾ãã
ããã©ã«ãã§ã¯ããã¼ã¿ã¯ã©ã¹ã®åçæ§ãæ¯è¼ã§ãã¾ãã `+ @ dataclass `ãã³ã¬ã¼ã¿ã§ ` order = True `ãæå®ããããã ` Country +`ã¯ã©ã¹ãã½ã¼ãã§ãã¾ãï¼
>>>
>>> norway == norway
True
>>> nepal == usa
False
>>> sorted((norway, usa, nepal))
[Country(name='Nepal', population=29384297, coastline=0),
Country(name='Norway', population=5320045, coastline=58133),
Country(name='United States', population=326625791, coastline=19924)]
ã½ã¼ãã¯ããã£ã¼ã«ãå¤ã§è¡ãããæåã«ã+ .name ããæ¬¡ã«ã .population ããªã©ã«ãªãã¾ãã ãã ãã ` fieldï¼ï¼`ã使ç¨ããå ´åã¯ãhttps://realpython.com/python-data-classes/#advanced-default-values [customize]ã§æ¯è¼ã§ä½¿ç¨ãããã£ã¼ã«ããæå®ã§ãã¾ãã ãã®ä¾ã§ã¯ã ` .area `ãã£ã¼ã«ã㯠` repr +`ã¨æ¯è¼ããé¤å¤ããã¾ããã
*注ï¼*å½ã®ãã¼ã¿ã¯https://www.cia.gov/library/publications/the-world-factbook/[CIA World Factbook]ããã®ãã®ã§ã2017å¹´7æã®æ¨å®äººå£æ°ã§ãã
ãã«ã¦ã§ã¼ã§ã®æ¬¡ã®æµ·è¾ºã®ä¼æãäºç´ããåã«ããã¡ã¯ãããã¯ãhttps://www.cia.gov/library/publications/the-world-factbook/geos/no.html [ãã«ã¦ã§ã¼ã®æ°å]ã«ã¤ãã¦æ¬¡ã®ããã«è¿°ã¹ã¦ãã¾ãã ãåå¤§è¥¿æ´æµ·æµã«ãã£ã¦ä¿®æ£ãããæµ·å²¸æ²¿ãã®æ¸©å¸¯ãéæ°´éãå¢å ããå¤ãããå¯ããªãå¯ã室å ;西海岸ã§ã¯ä¸å¹´ä¸é¨ãéãã¾ããã
ãã¼ã¿ã¯ã©ã¹ã¯https://dbader.org/blog/writing-clean-python-with-namedtuples [+ namedtuple +]ã¨åããã¨ãããã¤ãè¡ãã¾ãã ããã§ããå½¼ãã¯http://www.attrs.org/[`+ attrs +`ããã¸ã§ã¯ã]ããæå¤§ã®ã¤ã³ã¹ãã¬ã¼ã·ã§ã³ãå¾ã¦ãã¾ãã ãã®ä»ã®ä¾ã¨è©³ç´°ã«ã¤ãã¦ã¯ãhttps://realpython.com/python-data-classes/[ãã¼ã¿ã¯ã©ã¹ã®å®å
¨ã¬ã¤ã]ã¨https://www.python.org/dev/peps/pep-0557ãã覧ãã ããã/[PEP 557]å
¬å¼ã®èª¬æã
ã¢ã¸ã¥ã¼ã«å±æ§ã®ã«ã¹ã¿ãã¤ãº
屿§ã¯Pythonã®ã©ãã«ã§ãããã¾ãï¼ ã¯ã©ã¹å±æ§ã¯ããããæãæåã§ãããå®éã«ã¯å±æ§ã¯ã颿°ãã¢ã¸ã¥ã¼ã«ãªã©ãæ¬è³ªçã«ä½ã«ã§ãé ç½®ã§ãã¾ãã Pythonã®åºæ¬æ©è½ã®ããã¤ãã¯ã屿§ã¨ãã¦å®è£ ããã¾ããã»ã¨ãã©ã®ã¤ã³ããã¹ãã¯ã·ã§ã³æ©è½ãããã¥ã¡ã³ãæååãåå空éã§ãã ã¢ã¸ã¥ã¼ã«å ã®é¢æ°ã¯ãã¢ã¸ã¥ã¼ã«å±æ§ã¨ãã¦å©ç¨å¯è½ã«ãªãã¾ãã
屿§ã¯ãã»ã¨ãã©ã®å ´åãããã表è¨ã+ thing.attribute ãã使ç¨ãã¦åå¾ããã¾ãã ãã ãã ` getattrï¼ï¼+`ã使ç¨ãã¦ãå®è¡æã«ååãä»ãããã屿§ãåå¾ãããã¨ãã§ãã¾ãã
import random
random_attr = random.choice(("gammavariate", "lognormvariate", "normalvariate"))
random_func = getattr(random, random_attr)
print(f"A {random_attr} random value: {random_func(1, 1)}")
ãã®ã³ã¼ããå®è¡ããã¨ã次ã®ãããªãã®ãçæããã¾ãã
A gammavariate random value: 2.8017715125270618
ã¯ã©ã¹ã®å ´åã `+ thing.attr `ãå¼ã³åºãã¨ãæåã« ` thing `ã§å®ç¾©ããã¦ãã ` attr `ãæ¤ç´¢ããã¾ãã è¦ã¤ãããªãå ´åã¯ãç¹å¥ãªã¡ã½ãã ` thing . getattr ï¼" attr "ï¼`ãå¼ã³åºããã¾ãã ï¼ããã¯åç´åã§ãã 詳細ã«ã¤ãã¦ã¯ãhttpï¼//blog.lerner.co.il/python-attributes/[ãã®è¨äº]ãåç
§ãã¦ãã ãããï¼ ` . getattr ï¼ï¼+`ã¡ã½ããã使ç¨ãã¦ããªãã¸ã§ã¯ãã®å±æ§ã¸ã®ã¢ã¯ã»ã¹ãã«ã¹ã¿ãã¤ãºã§ãã¾ãã
Python 3.7ã¾ã§ã¯ãã¢ã¸ã¥ã¼ã«å±æ§ã«å¯¾ãã¦åãã«ã¹ã¿ãã¤ãºãç°¡åã«å©ç¨ã§ãã¾ããã§ããã ãã ããhttps://www.python.org/dev/peps/pep-0562/[PEP 562]ã§ã¯ã対å¿ãã + + dirï¼ï¼+` on a module]ãå¼ã³åºããçµæãã«ã¹ã¿ãã¤ãºã§ãã¾ãã dir ï¼ï¼+`颿°ã¨ã¨ãã«ãã¢ã¸ã¥ã¼ã«ã« `+ getattr ï¼ï¼+`ãå°å
¥ãã¦ãã¾ãã `+ dir ï¼ï¼+`ç¹æ®é¢æ°ã使ç¨ããã¨ãhttps://realpython.com/python-modules-packages/#the-dir-function [
PEPèªä½ã¯ããããã®é¢æ°ã®ä½¿ç¨æ¹æ³ã®ããã¤ãã®ä¾ã示ãã¦ãã¾ããããã«ã¯ã颿°ã¸ã®éæ¨å¥¨è¦åã®è¿½å ããéããµãã¢ã¸ã¥ã¼ã«ã®é å»¶èªã¿è¾¼ã¿ãå«ã¾ãã¾ãã 以ä¸ã§ã¯ã颿°ãã¢ã¸ã¥ã¼ã«ã«åçã«è¿½å ã§ããåç´ãªãã©ã°ã¤ã³ã·ã¹ãã ãæ§ç¯ãã¾ãã ãã®ä¾ã§ã¯ãPythonããã±ã¼ã¸ãå©ç¨ãã¦ãã¾ãã ããã±ã¼ã¸ã®å確èªãå¿ è¦ãªå ´åã¯ãhttps://realpython.com/python-modules-packages/[ãã®è¨äº]ãåç §ãã¦ãã ããã
æ°ãããã£ã¬ã¯ããªã+ plugins ãã使ããæ¬¡ã®ã³ã¼ãããã¡ã¤ã«ã plugins/ init ãpy +ãã«è¿½å ãã¾ãã
from importlib import import_module
from importlib import resources
PLUGINS = dict()
def register_plugin(func):
"""Decorator to register plug-ins"""
name = func.__name__
PLUGINS[name] = func
return func
def __getattr__(name):
"""Return a named plugin"""
try:
return PLUGINS[name]
except KeyError:
_import_plugins()
if name in PLUGINS:
return PLUGINS[name]
else:
raise AttributeError(
f"module {__name__!r} has no attribute {name!r}"
) from None
def __dir__():
"""List available plug-ins"""
_import_plugins()
return list(PLUGINS.keys())
def _import_plugins():
"""Import all resources to register plug-ins"""
for name in resources.contents(__name__):
if name.endswith(".py"):
import_module(f"{__name__}.{name[:-3]}")
ãã®ã³ã¼ããä½ãããããè¦ãåã«ã `+ plugins `ãã£ã¬ã¯ããªå ã«ããã«2ã¤ã®ãã¡ã¤ã«ã追å ãã¾ãã ã¾ãã ` plugins/plugin_1.py +`ãè¦ã¦ã¿ã¾ãããï¼
from . import register_plugin
@register_plugin
def hello_1():
print("Hello from Plugin 1")
次ã«ããã¡ã¤ã« `+ plugins/plugin_2.py +`ã«åæ§ã®ã³ã¼ãã追å ãã¾ãã
from . import register_plugin
@register_plugin
def hello_2():
print("Hello from Plugin 2")
@register_plugin
def goodbye():
print("Plugin 2 says goodbye")
ãããã®ãã©ã°ã¤ã³ã¯æ¬¡ã®ããã«ä½¿ç¨ã§ããããã«ãªãã¾ããã
>>>
>>> import plugins
>>> plugins.hello_1()
Hello from Plugin 1
>>> dir(plugins)
['goodbye', 'hello_1', 'hello_2']
>>> plugins.goodbye()
Plugin 2 says goodbye
ããã¯ãã¹ã¦é©å½çã§ããã¨ã¯æãããªãããããã¾ãããï¼ããããããã§ã¯ããã¾ããï¼ãå®éã«ããã§ä½ãèµ·ãã£ãã®ãè¦ã¦ã¿ã¾ãããã é常ã `+ plugins.hello_1ï¼ï¼`ãå¼ã³åºãã«ã¯ã ` hello_1ï¼ï¼`颿°ã ` plugins `ã¢ã¸ã¥ã¼ã«ã§å®ç¾©ãããã ` init ãpy +`å
ã§æç¤ºçã«ã¤ã³ãã¼ãããå¿
è¦ãããã¾ãplugins + `ããã±ã¼ã¸ã ããã§ã¯ãã©ã¡ãã§ãããã¾ããï¼
代ããã«ãã+ hello_1ï¼ï¼ãã¯ã plugins ãããã±ã¼ã¸å ã®ä»»æã®ãã¡ã¤ã«ã§å®ç¾©ãããã hello_1ï¼ï¼ãã¯ã @ register_plugin + `https://realpython.com/primer-on-python-decorators/[decorator]ã
éãã¯å¾®å¦ã§ãã 使ç¨å¯è½ãªæ©è½ãæç¤ºããããã±ã¼ã¸ã®ä»£ããã«ãåã ã®æ©è½ã¯ããã±ã¼ã¸ã®ä¸é¨ã¨ãã¦ç»é²ããã¾ãã ããã«ããã使ç¨å¯è½ãªé¢æ°ã®éä¸ãªã¹ããä¿æãããã¨ãªããã³ã¼ãã®æ®ãã®é¨åããç¬ç«ãã¦é¢æ°ã追å ã§ããåç´ãªæ§é ãå¾ããã¾ãã
`+ plugins/ init ãpy `ã³ã¼ãå
ã§ã® ` getattr ï¼ï¼`ã®åä½ãç°¡åã«ç¢ºèªãã¾ãããã ` plugins.hello_1ï¼ï¼`ãè¦æ±ããã¨ãPythonã¯æåã« ` plugins/ init ãpy `ãã¡ã¤ã«å
ã§ ` hello_1ï¼ï¼`颿°ãæ¢ãã¾ãã ãã®ãããªé¢æ°ã¯åå¨ããªããããPythonã¯ä»£ããã« ` getattr ï¼" hello_1 "ï¼`ãå¼ã³åºãã¾ãã ` getattr ï¼ï¼+`颿°ã®ã½ã¼ã¹ã³ã¼ããæãåºãã¦ãã ããï¼
def __getattr__(name):
"""Return a named plugin"""
try:
return PLUGINS[name] # 1) Try to return plugin
except KeyError:
_import_plugins() # 2) Import all plugins
if name in PLUGINS:
return PLUGINS[name] # 3) Try to return plugin again
else:
raise AttributeError( # 4) Raise error
f"module {__name__!r} has no attribute {name!r}"
) from None
`+ getattr ï¼ï¼+`ã«ã¯æ¬¡ã®æé ãå«ã¾ãã¾ãã æ¬¡ã®ãªã¹ãã®çªå·ã¯ãã³ã¼ãå
ã®çªå·ä»ãã³ã¡ã³ãã«å¯¾å¿ãã¦ãã¾ãã
æåã«ã颿°ã¯ `+ PLUGINS `è¾æ¸ããååä»ããã©ã°ã¤ã³ã楽観çã«è¿ããã¨ãã¾ãã ` name +`ã¨ããååã®ãã©ã°ã¤ã³ãåå¨ããæ¢ã«ã¤ã³ãã¼ãããã¦ããå ´åãããã¯æåãã¾ãã
ååä»ããã©ã°ã¤ã³ã `+ PLUGINS +`è¾æ¸ã«è¦ã¤ãããªãå ´åããã¹ã¦ã®ãã©ã°ã¤ã³ãã¤ã³ãã¼ãããã¦ãããã¨ã確èªãã¾ãã
ã¤ã³ãã¼ãå¾ã«ååä»ããã©ã°ã¤ã³ã使ç¨å¯è½ã«ãªã£ãå ´åããã®ãã©ã°ã¤ã³ãè¿ãã¾ãã
ãã¹ã¦ã®ãã©ã°ã¤ã³ãã¤ã³ãã¼ãããå¾ããã©ã°ã¤ã³ã `+ PLUGINS `è¾æ¸ã«ãªãå ´åã ` name `ã¯ç¾å¨ã®ã¢ã¸ã¥ã¼ã«ã®å±æ§ï¼ãã©ã°ã¤ã³ï¼ã§ã¯ãªããã¨ã示ã ` AttributeError +`ãçºçããã¾ãã
ããããã+ PLUGINS ãè¾æ¸ã¯ã©ã®ããã«åãè¾¼ã¾ãã¾ããï¼ ` _import_pluginsï¼ï¼`颿°ã¯ ` plugins `ããã±ã¼ã¸å ã®ãã¹ã¦ã®Pythonãã¡ã¤ã«ãã¤ã³ãã¼ããã¾ããã ` PLUGINS +`ã«ã¯è§¦ããªãããã§ãï¼
def _import_plugins():
"""Import all resources to register plug-ins"""
for name in resources.contents(__name__):
if name.endswith(".py"):
import_module(f"{__name__}.{name[:-3]}")
åãã©ã°ã¤ã³é¢æ°ã¯ã `+ @ register_plugin `ãã³ã¬ã¼ã¿ã«ãã£ã¦è£ 飾ããããã¨ãå¿ããªãã§ãã ããã ãã®ãã³ã¬ã¼ã¿ã¯ããã©ã°ã¤ã³ãã¤ã³ãã¼ããããã¨ãã«å¼ã³åºãããå®éã« ` PLUGINS +`ãã£ã¯ã·ã§ããªã«å ¥åããã¾ãã ãã©ã°ã¤ã³ãã¡ã¤ã«ã®1ã¤ãæåã§ã¤ã³ãã¼ãããã¨ãããã確èªã§ãã¾ãã
>>>
>>> import plugins
>>> plugins.PLUGINS
{}
>>> import plugins.plugin_1
>>> plugins.PLUGINS
{'hello_1': <function hello_1 at 0x7f29d4341598>}
ä¾ãç¶ããã¨ãã¢ã¸ã¥ã¼ã«ã§ `+ dirï¼ï¼+`ãå¼ã³åºãã¨ãæ®ãã®ãã©ã°ã¤ã³ãã¤ã³ãã¼ãããããã¨ã«æ³¨æãã¦ãã ããï¼
>>>
>>> dir(plugins)
['goodbye', 'hello_1', 'hello_2']
>>> plugins.PLUGINS
{'hello_1': <function hello_1 at 0x7f29d4341598>,
'hello_2': <function hello_2 at 0x7f29d4341620>,
'goodbye': <function goodbye at 0x7f29d43416a8>}
`+ dirï¼ï¼`ã¯é常ããªãã¸ã§ã¯ãã§å©ç¨å¯è½ãªãã¹ã¦ã®å±æ§ããªã¹ããã¾ãã é常ãã¢ã¸ã¥ã¼ã«ã§ ` dirï¼ï¼+`ã使ç¨ããã¨ã次ã®ãããªçµæã«ãªãã¾ãã
>>>
>>> import plugins
>>> dir(plugins)
['PLUGINS', '__builtins__', '__cached__', '__doc__',
'__file__', '__getattr__', '__loader__', '__name__',
'__package__', '__path__', '__spec__', '_import_plugins',
'import_module', 'register_plugin', 'resources']
ããã¯æç¨ãªæ
å ±ããããã¾ããããå©ç¨å¯è½ãªãã©ã°ã¤ã³ãå
¬éãããã¨ã«é¢å¿ãããã¾ãã Python 3.7ã§ã¯ãã¢ã¸ã¥ã¼ã«ã§ `+ dirï¼ï¼`ãå¼ã³åºããçµæãã«ã¹ã¿ãã¤ãºããã«ã¯ã ` dir ï¼ï¼`ç¹æ®é¢æ°ã追å ãã¾ãã ` plugins/ init ãpy +`ã®å ´åããã®é¢æ°ã¯ã¾ããã¹ã¦ã®ãã©ã°ã¤ã³ãã¤ã³ãã¼ãããã¦ãããã¨ã確èªãã¦ããããããã®ååããªã¹ããã¾ãã
def __dir__():
"""List available plug-ins"""
_import_plugins()
return list(PLUGINS.keys())
ãã®ä¾ãçµäºããåã«ãPython 3.7ã®å¥ã®ã¯ã¼ã«ãªæ°æ©è½ã使ç¨ãããã¨ã«æ³¨æãã¦ãã ããã + plugins +`ãã£ã¬ã¯ããªå
ã®ãã¹ã¦ã®ã¢ã¸ã¥ã¼ã«ãã¤ã³ãã¼ãããããã«ãæ°ããhttps://docs.python.org/3.7/library/importlib.html#module-importlib.resources [+ importlib.resources `]ã¢ã¸ã¥ã¼ã«ã使ç¨ãã¾ããã ãã®ã¢ã¸ã¥ã¼ã«ã¯ãã¢ã¸ã¥ã¼ã«ã¨ããã±ã¼ã¸å
ã®ãã¡ã¤ã«ã¨ãªã½ã¼ã¹ã¸ã®ã¢ã¯ã»ã¹ãæä¾ãã ` file `ããã¯ï¼å¸¸ã«åä½ããã¨ã¯éãã¾ããï¼ã ` pkg_resources `ï¼é
ãï¼ãå¿
è¦ã¨ãã¾ããã ` importlib.resources +`ã®ä»ã®æ©è½ã¯linkï¼ï¼other-pretty-cool-features [å¾ã§å¼·èª¿è¡¨ç¤º]ã«ãªãã¾ãã
å ¥åã®æ©è½å¼·å
åã®ãã³ãã¨æ³¨éã¯ãPython 3ã·ãªã¼ãºã®ãªãªã¼ã¹å ¨ä½ã§çµ¶ããéçºããã¦ãã¾ãã Pythonã®ã¿ã¤ãã³ã°ã·ã¹ãã ã¯ç¾å¨é常ã«å®å®ãã¦ãã¾ãã ããã§ããPython 3.7ã§ã¯ãããã©ã¼ãã³ã¹ã®åä¸ãã³ã¢ãµãã¼ããåæ¹åç §ãªã©ãããã¤ãã®æ©è½å¼·åããã¼ãã«ã«å ãããã¦ãã¾ãã
Pythonã¯å®è¡æã«åãã§ãã¯ãè¡ãã¾ããï¼https://pypi.org/project/enforce/[+ enforce +]ãªã©ã®ããã±ã¼ã¸ãæç¤ºçã«ä½¿ç¨ãã¦ããå ´åãé¤ãï¼ã ãããã£ã¦ãã¿ã¤ããã³ããã³ã¼ãã«è¿½å ãã¦ãããã®ããã©ã¼ãã³ã¹ã«å½±é¿ã¯ããã¾ããã
æ®å¿µãªãããã»ã¨ãã©ã®ã¿ã¤ããã³ãã«ã¯ `+ typing `ã¢ã¸ã¥ã¼ã«ãå¿ è¦ãªã®ã§ãããã¯å®å ¨ã«çå®ã§ã¯ããã¾ããã ` typing `ã¢ã¸ã¥ã¼ã«ã¯ãæ¨æºã©ã¤ãã©ãªã®https://www.python.org/dev/peps/pep-0560/#performance[slowest modules]ã®1ã¤ã§ãã https://www.python.org/dev/peps/pep-0560[PEP 560]ã¯ãPython 3.7ã§å ¥åããããã®ã³ã¢ãµãã¼ãã追å ãã¾ããããã«ããã ` typing +`ã¢ã¸ã¥ã¼ã«ãå¤§å¹ ã«é«éåããã¾ãã ãã®è©³ç´°ã¯ãä¸è¬ã«ç¥ãå¿ è¦ã¯ããã¾ããã å¾ãã«å¾ãã¦ãããã©ã¼ãã³ã¹ã®åä¸ããæ¥½ãã¿ãã ããã
Pythonã®åã·ã¹ãã ã¯é©åº¦ã«è¡¨ç¾åãããã¾ãããå¤å°ã®çã¿ãå¼ãèµ·ããåé¡ã®1ã¤ã¯åæ¹åç §ã§ãã åã®ãã³ãï¼ããä¸è¬çã«ã¯æ³¨éï¼ã¯ãã¢ã¸ã¥ã¼ã«ã®ã¤ã³ãã¼ãä¸ã«è©ä¾¡ããã¾ãã ãããã£ã¦ã使ç¨ããåã«ãã¹ã¦ã®ååãå®ç¾©ãã¦ããå¿ è¦ãããã¾ãã 以ä¸ã¯ä¸å¯è½ã§ãã
class Tree:
def __init__(self, left: Tree, right: Tree) -> None:
self.left = left
self.right = right
ã³ã¼ããå®è¡ããã¨ãã¯ã©ã¹ `+ Tree `ã ` . init ï¼ï¼`ã¡ã½ããã®å®ç¾©ã§ã¾ã ï¼å®å
¨ã«ï¼å®ç¾©ããã¦ããªãããã ` NameError +`ãçºçãã¾ãã
Traceback (most recent call last):
File "tree.py", line 1, in <module>
class Tree:
File "tree.py", line 2, in Tree
def __init__(self, left: Tree, right: Tree) -> None:
NameError: name 'Tree' is not defined
ãããå æããã«ã¯ã代ããã«æååãªãã©ã«ã¨ã㦠`" Tree "`ãè¨è¿°ããå¿ è¦ãããã¾ãã
class Tree:
def __init__(self, left: "Tree", right: "Tree") -> None:
self.left = left
self.right = right
å ã®è°è«ã«ã¤ãã¦ã¯ãhttps://www.python.org/dev/peps/pep-0484/#forward-references [PEP 484]ãåç §ãã¦ãã ããã
å°æ¥ã®http://www.curiousefficiency.org/posts/2014/08/python-4000.html[Python 4.0]ã§ã¯ããã®ãããªããããåæ¹åç
§ã許å¯ããã¾ãã ããã¯ãæç¤ºçã«è¦æ±ãããã¾ã§æ³¨éãè©ä¾¡ããªããã¨ã§å¦çããã¾ãã PEP 563ã¯ããã®ææ¡ã®è©³ç´°ã説æãã¦ãã¾ãã Python 3.7ã§ã¯ãåæ¹åç
§ã¯https://docs.python.org/library/future.html [+ import]ã¨ãã¦æ¢ã«å©ç¨å¯è½ã§ãã æ¬¡ã®ããã«è¨è¿°ã§ãã¾ãã future +
from __future__ import annotations
class Tree:
def __init__(self, left: Tree, right: Tree) -> None:
self.left = left
self.right = right
åãã³ããå®è¡ãããªããããå¤å°ä¸å¨ç¨ãª +" Tree "+`æ§æãåé¿ãããã¨ã«å ãã¦ãå»¶æãããæ³¨éã®è©ä¾¡ãã³ã¼ããé«éåãããã¨ã«æ³¨æãã¦ãã ããã åæ¹åç
§ã¯ãhttpï¼//mypy-lang.org/[+ mypy +`]ã§æ¢ã«ãµãã¼ãããã¦ãã¾ãã
注éã®æãä¸è¬çãªä½¿ç¨æ³ã¯ãã¿ã¤ããã³ãã£ã³ã°ã§ãã ããã§ããå®è¡æã«æ³¨éã«å®å ¨ã«ã¢ã¯ã»ã¹ã§ããå¿ è¦ã«å¿ãã¦ä½¿ç¨ã§ãã¾ãã ã¢ããã¼ã·ã§ã³ãç´æ¥å¦çããå ´åã¯ãå¯è½ãªåæ¹åç §ãæç¤ºçã«å¦çããå¿ è¦ãããã¾ãã
注éãè©ä¾¡ãããã¿ã¤ãã³ã°ã示ããæããã«é¦¬é¹¿ããä¾ã使ãã¦ã¿ã¾ãããã ã¾ããå¤ãã¹ã¿ã¤ã«ã§è¡ããããæ³¨éã¯ã¤ã³ãã¼ãæã«è©ä¾¡ããã¾ãã `+ anno.py +`ã«æ¬¡ã®ã³ã¼ããå«ãã¾ãï¼
def greet(name: print("Now!")):
print(f"Hello {name}")
`+ name `ã®æ³¨é㯠` printï¼ï¼+`ã§ãããã¨ã«æ³¨æãã¦ãã ããã ããã¯ã注éãè©ä¾¡ãããã¿ã¤ãã³ã°ãæ£ç¢ºã«ç¢ºèªããããã§ãã æ°ããã¢ã¸ã¥ã¼ã«ãã¤ã³ãã¼ããã¾ãã
>>>
>>> import anno
Now!
>>> anno.greet.__annotations__
{'name': None}
>>> anno.greet("Alice")
Hello Alice
ã覧ã®ã¨ãããæ³¨éã¯ã¤ã³ãã¼ãæã«è©ä¾¡ããã¾ããã `+ name `㯠` printï¼ï¼`ã®æ»ãå¤ã§ããããã ` None +`ã§æ³¨éä»ãããããã¨ã«æ³¨æãã¦ãã ããã
`+ future +`ã¤ã³ãã¼ãã追å ãã¦ã注éã®å»¶æãããè©ä¾¡ãæå¹ã«ãã¾ãã
from __future__ import annotations
def greet(name: print("Now!")):
print(f"Hello {name}")
ãã®æ´æ°ãããã³ã¼ããã¤ã³ãã¼ããã¦ããæ³¨éã¯è©ä¾¡ããã¾ããã
>>>
>>> import anno
>>> anno.greet.__annotations__
{'name': "print('Now!')"}
>>> anno.greet("Marty")
Hello Marty
`+ Nowï¼`ã¯æ±ºãã¦åºåããããæ³¨é㯠` annotations `è¾æ¸ã®æååãªãã©ã«ã¨ãã¦ä¿æããããã¨ã«æ³¨æãã¦ãã ããã æ³¨éãè©ä¾¡ããã«ã¯ã ` typing.get_type_hintsï¼ï¼`ã¾ã㯠` evalï¼ï¼+`ã使ç¨ãã¾ãã
>>>
>>> import typing
>>> typing.get_type_hints(anno.greet)
Now!
{'name': <class 'NoneType'>}
>>> eval(anno.greet.__annotations__["name"])
Now!
>>> anno.greet.__annotations__
{'name': "print('Now!')"}
`+ annotations +`è¾æ¸ã¯æ±ºãã¦æ´æ°ãããªãããã使ç¨ãããã³ã«ã¢ããã¼ã·ã§ã³ãè©ä¾¡ããå¿
è¦ãããã¾ãã
ã¿ã¤ãã³ã°ç²¾åº¦
Python 3.7ã§ã¯ãhttps://docs.python.org/library/time.html [+ time +]ã¢ã¸ã¥ã¼ã«ã¯https://www.python.org/dev/peps/pep-ã§èª¬æããã¦ããããã¤ãã®æ°ããæ©è½ãåå¾ãã¾ã0564/[PEP 564]ã ç¹ã«ã次ã®6ã¤ã®æ©è½ã追å ããã¦ãã¾ãã
*
+ clock_gettime_nsï¼ï¼+ï¼*æå®ãããæè¨ã®æå»ãè¿ãã¾ã
*
+ clock_settime_nsï¼ï¼+ï¼*æå®ããæè¨ã®æå»ãè¨å®ãã¾ã
*
+ monotonic_nsï¼ï¼+ï¼*徿¹ã«ç§»åã§ããªãç¸å¯¾æè¨ã®æéãè¿ãã¾ãï¼ãã¨ãã°å¤æéã®ããï¼
ï¼ããã©ã¼ãã³ã¹ã«ã¦ã³ã¿ã¼ï¼çãééãæ¸¬å®ããããã«ç¹å¥ã«è¨è¨ãããã¯ããã¯ï¼ã®å¤ãè¿ãã¾ãã+ perf_counter_nsï¼ï¼+
*
+ process_time_nsï¼ï¼+ï¼*ç¾å¨ã®ããã»ã¹ã®ã·ã¹ãã ã¨ã¦ã¼ã¶ã¼CPUæéã®åè¨ãè¿ãã¾ãï¼ã¹ãªã¼ãæéã¯å«ã¾ãã¾ããï¼
ï¼1970å¹´1æ1æ¥ããã®ããç§æ°ãè¿ãã¾ã+ time_nsï¼ï¼+
ããæå³ã§ã¯ã追å ãããæ°ããæ©è½ã¯ããã¾ããã å颿°ã¯ã `+ _ns `æ¥å°¾è¾ã®ãªãæ¢åã®é¢æ°ã«ä¼¼ã¦ãã¾ãã éãã¯ãæ°ãã颿°ãã float ãã¨ãã¦ã®ç§æ°ã§ã¯ãªããã int +ãã¨ãã¦æ°ããç§ãè¿ããã¨ã§ãã
ã»ã¨ãã©ã®ã¢ããªã±ã¼ã·ã§ã³ã§ã¯ããããã®æ°ããããç§æ©è½ã¨å¤ãåçã®æ©è½ã¨ã®éãã¯ãã¾ãèªèããã¾ããã ãã ããæ°ãã颿°ã¯ãã+ float ãã§ã¯ãªãã int +ãã«ä¾åãã¦ãããããæ¨è«ã容æã§ãã æµ®åå°æ°ç¹æ°ã¯https://docs.python.org/tutorial/floatingpoint.html [æ¬è³ªçã«ä¸æ£ç¢º]ã§ãã
>>>
>>> 0.1 + 0.1 + 0.1
0.30000000000000004
>>> 0.1 + 0.1 + 0.1 == 0.3
False
ããã¯Pythonã®åé¡ã§ã¯ãªããæéã®ãããæ°ã使ç¨ãã¦ç¡éã®10鲿°ã表ç¾ããå¿ è¦ãããã³ã³ãã¥ã¼ã¿ã¼ã®çµæã§ãã
Pythonã® + float +`ã¯https://en.wikipedia.org/wiki/IEEE_754[IEEE 754æ¨æº]ã«å¾ãã53ãããã®æå¹ãããã使ç¨ãã¾ãã ãã®çµæãç´104æ¥ï¼2âµÂ³ã¾ãã¯ç´https://en.wikipedia.org/wiki/Names_of_large_numbers[9å
ããç§]ï¼ãè¶
ããæéã¯ãããç§ç²¾åº¦ã®æµ®åå°æ°ç¹æ°ã¨ãã¦è¡¨ç¾ã§ãã¾ããã 対ç
§çã«ãPython https://stackoverflow.com/a/9860611 [+ int +` is unlimited]ã§ãããããæ´æ°å¤ã®ããç§ã¯ãæéå¤ã«é¢ä¿ãªã常ã«ããç§ã®ç²¾åº¦ãæã¡ã¾ãã
ä¾ã¨ãã¦ã `+ time.timeï¼ï¼`ã¯1970å¹´1æ1æ¥ããã®ç§æ°ãè¿ãã¾ãã ãã®æ°å¤ã¯ãã§ã«é常ã«å¤§ããããããã®æ°å¤ã®ç²¾åº¦ã¯ãã¤ã¯ãç§ã¬ãã«ã§ãã ãã®é¢æ°ã¯ã ` _ns `ãã¼ã¸ã§ã³ã§æå¤§ã®æ¹åã示ãã¦ãã¾ãã ` time.time_nsï¼ï¼`ã®è§£å度ã¯ã ` time.timeï¼ï¼+`ãããhttps://www.python.org/dev/peps/pep-0564/#analysis[3å]ç¨åº¦ã§ãã ã
ã¨ããã§ããç§ã¨ã¯ä½ã§ããï¼ æè¡çã«ã¯ã10ååã®1ç§ãã¾ãã¯ç§å¦è¡¨è¨æ³ã好ãå ´åã¯ã+ 1e-9 +ãç§ã§ãã ãã ãããããã¯åãªãæ°åã§ãããå®éã«ã¯ç´æãæä¾ãã¾ããã ããåªããè¦è¦è³æã«ã¤ãã¦ã¯ãhttps://en.wikipedia.org/wiki/Grace_Hopper#Anecdotes [Grace Hopperâs]ã®ç´ æ´ãããhttps://www.youtube.com/watch?v=JEpsKnWZrJ8 [ããç§ã®ãã¢]ãã覧ãã ããã
ä½è«ã§ãããããç§ã®ç²¾åº¦ã§æ¥æãæä½ããå¿ è¦ãããå ´åã `+ datetime +`æ¨æºã©ã¤ãã©ãªã¯ãããã«ãããã¾ããã æç¤ºçã«ã¯ãã¤ã¯ãç§ã®ã¿ãå¦çãã¾ãã
>>>
>>> from datetime import datetime, timedelta
>>> datetime(2018, 6, 27) + timedelta(seconds=1e-6)
datetime.datetime(2018, 6, 27, 0, 0, 0, 1)
>>> datetime(2018, 6, 27) + timedelta(seconds=1e-9)
datetime.datetime(2018, 6, 27, 0, 0)
代ããã«ãhttpï¼//www.astropy.org/[+ astropy +`ããã¸ã§ã¯ã]ã使ç¨ã§ãã¾ãã ãã®http://docs.astropy.org/en/stable/time/[+ astropy.time `]ããã±ã¼ã¸ã¯ã2ã¤ã® ` float +`ãªãã¸ã§ã¯ãã使ç¨ãã¦æ¥æã表ãã¾ãã ã
>>>
>>> from astropy.time import Time, TimeDelta
>>> Time("2018-06-27")
<Time object: scale='utc' format='iso' value=2018-06-27 00:00:00.000>
>>> t = Time("2018-06-27") + TimeDelta(1e-9, format="sec")
>>> (t - Time("2018-06-27")).sec
9.976020010071807e-10
`+ astropy +`ã®ææ°ãã¼ã¸ã§ã³ã¯ãPython 3.5以éã§å©ç¨ã§ãã¾ãã
ãã®ä»ã®ããªãã¯ã¼ã«ãªæ©è½
ããã¾ã§ãPython 3.7ã®æ°æ©è½ã«é¢ãããããã©ã¤ã³ãã¥ã¼ã¹ãè¦ã¦ãã¾ããã ãã ããä»ã«ãå¤ãã®å¤æ´ç¹ããããããããããªãã¯ã¼ã«ã§ãã ãã®ã»ã¯ã·ã§ã³ã§ã¯ããããã®ããã¤ããç°¡åã«è¦ã¦ããã¾ãã
è¾æ¸ã®é åºã¯ä¿è¨¼ããã¦ãã¾ã
Python 3.6ã®CPythonå®è£ ã§ã¯ãè¾æ¸ã並ã¹ã¦ãã¾ãã ï¼http://pypy.org/[PyPy]ã«ããããããã¾ããï¼ããã¯ãè¾æ¸ã®ã¢ã¤ãã ãæ¿å ¥ãããã®ã¨åãé åºã§ç¹°ãè¿ããããã¨ãæå³ãã¾ãã æåã®ä¾ã¯Python 3.5ã使ç¨ãã2çªç®ã®ä¾ã¯Python 3.6ã使ç¨ãã¦ãã¾ãã
>>>
>>> {"one": 1, "two": 2, "three": 3} # Python <= 3.5
{'three': 3, 'one': 1, 'two': 2}
>>> {"one": 1, "two": 2, "three": 3} # Python >= 3.6
{'one': 1, 'two': 2, 'three': 3}
Python 3.6ã§ã¯ããã®é åºä»ã㯠`+ dict +`ã®å®è£ ã®è¯ãçµæã§ããã ãã ããPython 3.7ã§ã¯ãæ¿å ¥é åºãä¿æããè¾æ¸ã¯https://mail.python.org/pipermail/python-dev/2017-December/151283.html [è¨èªä»æ§]ã®ä¸é¨ã§ãã ãã®ãããPython> = 3.7ï¼ã¾ãã¯CPython> = 3.6ï¼ã®ã¿ããµãã¼ãããããã¸ã§ã¯ãã«ä¾åããããã«ãªãã¾ããã
ã + async +ãããã³ã + await +ãã¯ãã¼ã¯ã¼ãã§ã
Python 3.5ã¯https://www.python.org/dev/peps/pep-0492/[ãasyncãããã³ã+ await ãæ§æã®ã³ã«ã¼ãã³ï¼ãå°å ¥ãã¾ããã 徿¹äºææ§ã®åé¡ãåé¿ããããã«ãäºç´ãã¼ã¯ã¼ãã®ãªã¹ãã«ã async ãã¨ã await ãã¯è¿½å ããã¾ããã§ããã è¨ãæããã°ã ` async `ããã³ ` await +`ã¨ããååã®å¤æ°ã¾ãã¯é¢æ°ãå®ç¾©ãããã¨ã¯ä¾ç¶ã¨ãã¦å¯è½ã§ããã
Python 3.7ã§ã¯ãããã¯ãã¯ãä¸å¯è½ã§ãã
>>>
>>> async = 1
File "<stdin>", line 1
async = 1
^
SyntaxError: invalid syntax
>>> def await():
File "<stdin>", line 1
def await():
^
SyntaxError: invalid syntax
ã + asyncio +ããã§ã¤ã¹ãªãã
`+ asyncio +`æ¨æºã©ã¤ãã©ãªã¯ãã¨ãã¨Python 3.4ã§å°å ¥ãããã¤ãã³ãã«ã¼ããã³ã«ã¼ãã³ããã¥ã¼ãã£ã¼ã使ç¨ãã¦ç¾ä»£çãªæ¹æ³ã§ä¸¦è¡æ§ãå¦çãã¾ãã https://hackernoon.com/asyncio-for-the-working-python-developer-5c468e6e2e8e [ããããç´¹ä»]ãã覧ãã ããã
Python 3.7ã§ã¯ã + asyncio +`ã¢ã¸ã¥ã¼ã«ã¯https://docs.python.org/3.7/whatsnew/3.7.html#asyncio [ã¡ã¸ã£ã¼ãã§ã¤ã¹ãªãã]ãåå¾ãã¦ãã¾ããããã«ã¯ãå¤ãã®æ°ãã颿°ãã³ã³ããã¹ã夿°ã®ãµãã¼ããå«ã¾ãã¾ãï¼ linkï¼ï¼context-variables [below]ï¼ãããã³ããã©ã¼ãã³ã¹ã®æ¹åã ç¹ã«æ³¨æãã¹ãã¯ã `+ asyncio.runï¼ï¼+`ã§ããããã«ãããåæã³ã¼ãããã®ã³ã«ã¼ãã³ã®å¼ã³åºããç°¡åã«ãªãã¾ãã https://docs.python.org/3.7/library/asyncio-task.html#asyncio.run [+ asyncio.runï¼ï¼+`]ã使ç¨ããã¨ãã¤ãã³ãã«ã¼ããæç¤ºçã«ä½æããå¿
è¦ã¯ããã¾ããã éåæHello Worldããã°ã©ã ã使ã§ããããã«ãªãã¾ããã
import asyncio
async def hello_world():
print("Hello World!")
asyncio.run(hello_world())
ã³ã³ããã¹ã夿°
ã³ã³ããã¹ã夿°ã¯ãã³ã³ããã¹ãã«å¿ãã¦ç°ãªãå¤ãæã¤ãã¨ãã§ãã夿°ã§ãã ãããã¯ãåå®è¡ã¹ã¬ããã夿°ã«å¯¾ãã¦ç°ãªãå¤ãæã¤å¯è½æ§ãããã¹ã¬ãããã¼ã«ã«ã¹ãã¬ã¼ã¸ã«ä¼¼ã¦ãã¾ãã ãã ããã³ã³ããã¹ã夿°ã§ã¯ã1ã¤ã®å®è¡ã¹ã¬ããã«è¤æ°ã®ã³ã³ããã¹ããåå¨ããå ´åãããã¾ãã ã³ã³ããã¹ã夿°ã®ä¸»ãªä½¿ç¨ä¾ã¯ãåæéåæã¿ã¹ã¯ã®å¤æ°ã追跡ãããã¨ã§ãã
次ã®ä¾ã§ã¯ã3ã¤ã®ã³ã³ããã¹ãã使ããããããã«å¤ã+ name ãã®ç¬èªã®å¤ãè¨å®ãã¾ãã ` greetï¼ï¼`颿°ã¯ãå¾ã§åã³ã³ããã¹ãå ã§ ` name +`ã®å¤ã使ç¨ã§ãã¾ãã
import contextvars
name = contextvars.ContextVar("name")
contexts = list()
def greet():
print(f"Hello {name.get()}")
# Construct contexts and set the context variable name
for first_name in ["Steve", "Dina", "Harry"]:
ctx = contextvars.copy_context()
ctx.run(name.set, first_name)
contexts.append(ctx)
# Run greet function inside each context
for ctx in reversed(contexts):
ctx.run(greet)
ãã®ã¹ã¯ãªãããå®è¡ããã¨ãSteveãDinaãããã³Harryã®é åºãéã«ãªãã¾ãã
$ python3.7 context_demo.pyHello HarryHello DinaHello Steve
ã + importlib.resources +ãã使ç¨ãããã¼ã¿ãã¡ã¤ã«ã®ã¤ã³ãã¼ã
Pythonããã¸ã§ã¯ããããã±ã¼ã¸åããã¨ãã®1ã¤ã®èª²é¡ã¯ãããã¸ã§ã¯ãã«å¿ è¦ãªãã¼ã¿ãã¡ã¤ã«ãªã©ã®ããã¸ã§ã¯ããªã½ã¼ã¹ãã©ãããããæ±ºå®ãããã¨ã§ãã ããã¤ãã®ãªãã·ã§ã³ãä¸è¬çã«ä½¿ç¨ããã¦ãã¾ãï¼
ãã¼ã¿ãã¡ã¤ã«ã¸ã®ãã¹ããã¼ãã³ã¼ããã¾ãã
ãã¼ã¿ãã¡ã¤ã«ãããã±ã¼ã¸å ã«é ç½®ãã `+
file+`ã使ç¨ãã¦æ¤ç´¢ãã¾ãã
https://setuptools.readthedocs.io/en/latest/pkg_resources.html [
+ setuptools.pkg_resources +]ã使ç¨ãã¦ããã¼ã¿ãã¡ã¤ã«ãªã½ã¼ã¹ã«ã¢ã¯ã»ã¹ãã¾ãã
ãããã«ã¯ããããæ¬ ç¹ãããã¾ãã æåã®ãªãã·ã§ã³ã¯ç§»æ¤æ§ãããã¾ããã `+ file `ã使ç¨ããã¨ããç§»æ¤æ§ãé«ããªãã¾ãããPythonããã¸ã§ã¯ããã¤ã³ã¹ãã¼ã«ããã¦ããå ´åã¯ãzipå
ã«é
ç½®ããã ` file +`屿§ããªãå ´åãããã¾ãã 3çªç®ã®ãªãã·ã§ã³ã¯ãã®åé¡ã解決ãã¾ãããæ®å¿µãªããé常ã«é
ãã§ãã
ããè¯ã解決çã¯ãæ¨æºã©ã¤ãã©ãªã®æ°ããhttps://docs.python.org/3.7/library/importlib.html#module-importlib.resources [+ importlib.resources +]ã¢ã¸ã¥ã¼ã«ã§ãã Pythonã®æ¢åã®ã¤ã³ãã¼ãæ©è½ã使ç¨ãã¦ããã¼ã¿ãã¡ã¤ã«ãã¤ã³ãã¼ããã¾ãã æ¬¡ã®ãããªPythonããã±ã¼ã¸å
ã«ãªã½ã¼ã¹ãããã¨ä»®å®ãã¾ãã
data/ â âââ alice_in_wonderland.txt âââ __init__.py
`+ data `ã¯https://realpython.com/python-modules-packages/[Pythonããã±ã¼ã¸]ã§ããå¿
è¦ããããã¨ã«æ³¨æãã¦ãã ããã ã¤ã¾ãããã£ã¬ã¯ããªã«ã¯ ` init ãpy `ãã¡ã¤ã«ï¼ç©ºã®å ´åãããã¾ãï¼ãå«ã¾ãã¦ããå¿
è¦ãããã¾ãã ãã®å¾ã次ã®ããã« ` alice_in_wonderland.txt +`ãã¡ã¤ã«ãèªããã¨ãã§ãã¾ãï¼
>>>
>>> from importlib import resources
>>> with resources.open_text("data", "alice_in_wonderland.txt") as fid:
... alice = fid.readlines()
...
>>> print("".join(alice[:7]))
CHAPTER I. Down the Rabbit-Hole
Alice was beginning to get very tired of sitting by her sister on the
bank, and of having nothing to do: once or twice she had peeped into the
book her sister was reading, but it had no pictures or conversations in
it, "and what is the use of a book," thought Alice "without pictures or
conversations?"
åæ§ã®https://docs.python.org/3.7/library/importlib.html#importlib.resources.open_binary [+ resources.open_binaryï¼ï¼+]颿°ã¯ããã¤ããªã¢ã¼ãã§ãã¡ã¤ã«ãéãããã«ä½¿ç¨ã§ãã¾ãã åã®ãªã³ã¯ï¼ï¼customization-of-module-attributes [ãã¢ã¸ã¥ã¼ã«å±æ§ã¨ãã¦ã®ãã©ã°ã¤ã³ãã®ä¾]ã§ã¯ã `+ importlib.resources `ã使ç¨ãã¦ã ` resources.contentsï¼ï¼+`ã使ç¨ãã¦ä½¿ç¨å¯è½ãªãã©ã°ã¤ã³ãæ¤åºãã¾ããã 詳細ã«ã¤ãã¦ã¯ãhttps://www.youtube.com/watchï¼v = ZsGFU2qh73E [Barry Warsawã®PyCon 2018ãã¼ã¯]ãã覧ãã ããã
Python 2.7ããã³Python 3.4+ã§ã¯ãhttps://pypi.org/project/importlib_resources/[backport]ãä»ã㦠+ importlib.resources +`ã使ç¨ã§ãã¾ãã http://importlib-resources.readthedocs.io/en/latest/migration.html [+ pkg_resources `ãã ` importlib.resources +`ã¸ã®ç§»è¡ã¬ã¤ã]ãå©ç¨å¯è½ã§ãã
éçºè ã®ç§
Python 3.7ã«ã¯ãéçºè
ã¨ãã¦ã®ã¦ã¼ã¶ã¼åãã®æ©è½ãããã¤ã追å ããã¦ãã¾ãã linkï¼ï¼the-breakpoint-built-in [ãã§ã«æ°ãã + breakpointï¼ï¼+`ãã«ãã¤ã³ã表示ããã¦ãã¾ã] ããã«ãããã¤ãã®æ°ããhttps://docs.python.org/3.7/using/cmdline.html#id5 [+ -X +`ã³ãã³ãã©ã¤ã³ãªãã·ã§ã³]ãPythonã¤ã³ã¿ã¼ããªã¿ã¼ã«è¿½å ããã¾ããã
`+ -X importtime +`ã使ç¨ããã¨ãã¹ã¯ãªããã®ã¤ã³ãã¼ãã«ãããæéãç°¡åã«ææ¡ã§ãã¾ãã
$ python3.7 -X importtime my_script.py
import time: self [us] | cumulative | imported package
import time: 2607 | 2607 | _frozen_importlib_external
...
import time: 844 | 28866 | importlib.resources
import time: 404 | 30434 | plugins
`+ cumulative `ã«ã©ã ã«ã¯ãã¤ã³ãã¼ãã®ç´¯ç©æéãï¼ãã¤ã¯ãç§åä½ã§ï¼è¡¨ç¤ºããã¾ãã ãã®ä¾ã§ã¯ãã plugins ãã®ã¤ã³ãã¼ãã«ã¯ç´0.03ç§ãããããã®ã»ã¨ãã©ã¯ã importlib.resources ãã®ã¤ã³ãã¼ãã«è²»ãããã¾ããã ã self +ãåã¯ããã¹ããããã¤ã³ãã¼ããé¤ãã¤ã³ãã¼ãæéã示ãã¾ãã
ããã§ãã-X dev +ãã使ç¨ãã¦ãéçºã¢ã¼ãããæå¹ã«ã§ãã¾ããéçºã¢ã¼ãã§ã¯ãããã©ã«ãã§ã¯æå¹ã«ããã«ã¯é ãããã¨è¦ãªãããç¹å®ã®ãããã°æ©è½ã¨ã©ã³ã¿ã¤ã ãã§ãã¯ã追å ããã¾ãã ããã«ã¯ãhttps://docs.python.org/library/faulthandler.html#module-faulthandler [` faulthandler +`]ãæå¹ã«ãã¦ãé大ãªã¯ã©ãã·ã¥ã®ãã¬ã¼ã¹ããã¯ã¨ãããå¤ãã®è¦åã¨ãããã°ããã¯ã表示ãããã¨ãå«ã¾ãã¾ãã
æå¾ã«ã `+ -X utf8 `ã¯https://docs.python.org/3.7/using/cmdline.html#envvar-PYTHONUTF8[UTF-8ã¢ã¼ã]ãæå¹ã«ãã¾ãã ï¼https://www.python.org/dev/peps/pep-0540/[PEP 540]ãåç §ãã¦ãã ãããï¼ãã®ã¢ã¼ãã§ã¯ãç¾å¨ã®ãã±ã¼ã«ã«é¢ä¿ãªããããã¹ãã¨ã³ã³ã¼ãã£ã³ã°ã« ` UTF-8 +`ã使ç¨ããã¾ãã
æé©å
Pythonã®æ°ãããªãªã¼ã¹ã«ã¯ãããããæé©åã®ã»ãããä»å±ãã¦ãã¾ãã Python 3.7ã«ã¯ã次ã®ãããªå¤§å¹ ãªé«éåãããã¾ãã
æ¨æºã©ã¤ãã©ãªã®å¤ãã®ã¡ã½ãããå¼ã³åºãéã®ãªã¼ãã¼ããããå°ãªããªãã¾ãã
ã¡ã½ããå¼ã³åºãã¯ä¸è¬ã«æå¤§20ï¼ é«éã§ãã
Pythonèªä½ã®èµ·åæéã¯10ã30ï¼ ç縮ããã¾ãã
`+ typing +`ã®ã¤ã³ãã¼ãã¯7åé«éã§ãã
ããã«ãããå¤ãã®ç¹æ®ãªæé©åãå«ã¾ãã¦ãã¾ãã è©³ç´°ãªæ¦è¦ã«ã¤ãã¦ã¯ãhttps://docs.python.org/3.7/whatsnew/3.7.html#optimizations [ãã®ãªã¹ã]ãåç §ãã¦ãã ããã
ããããã¹ã¦ã®æé©åã®çµæã¯ãhttps://speed.python.org/[Python 3.7ã¯é«é]ã§ãã ããã¯ãããã¾ã§ã«ãªãªã¼ã¹ãããhttps://hackernoon.com/which-is-the-fastest-version-of-python-2ae7c61a6b2b[CPythonã®æéãã¼ã¸ã§ã³]ã§ãã
ã ãããã¢ããã°ã¬ã¼ãããå¿ è¦ãããã¾ããï¼
ç°¡åãªçãããå§ãã¾ãããã ããã§è¦ãæ°æ©è½ã試ãã¦ã¿ããå ´åã¯ãPython 3.7ã使ç¨ã§ããå¿
è¦ãããã¾ãã https://github.com/pyenv/pyenv [+ pyenv +]ãhttps://www.anaconda.com/download/[Anaconda]ãªã©ã®ãã¼ã«ã使ç¨ããã¨ãPythonã®ããã¤ãã®ãã¼ã¸ã§ã³ãç°¡åã«ä¸¦ã¹ã¦ã¤ã³ã¹ãã¼ã«ã§ãã¾ãã Python 3.7ãã¤ã³ã¹ãã¼ã«ãã¦è©¦ãã¦ã¿ã¦ããã¤ãã¹é¢ã¯ããã¾ããã
ãã¦ãããè¤éãªè³ªåã«ã å®ç¨¼åç°å¢ãPython 3.7ã«ã¢ããã°ã¬ã¼ãããå¿ è¦ãããã¾ããï¼ æ°ããæ©è½ãå©ç¨ããã«ã¯ãPython 3.7ã«ä¾åããç¬èªã®ããã¸ã§ã¯ãã使ããå¿ è¦ãããã¾ããï¼
å®ç¨¼åç°å¢ãã¢ããã°ã¬ã¼ãããåã«å¸¸ã«å¾¹åºçãªãã¹ããè¡ãå¿ è¦ããããã¨ã¯æãããªè¦åã§ãããããPython 3.7ã«ã¯ä»¥åã®ã³ã¼ããå£ããã®ã¯ã»ã¨ãã©ããã¾ããï¼ `+ async `ããã³ ` await +`ããã¼ã¯ã¼ãã«ãªãã®ã¯ä¸ä¾ã§ãï¼ã ãã§ã«ææ°ã®Pythonã使ç¨ãã¦ããå ´åã3.7ã¸ã®ã¢ããã°ã¬ã¼ãã¯é常ã«ã¹ã ã¼ãºã«è¡ãããã¯ãã§ãã å°ãä¿å®çã«ãªãããå ´åã¯ãæåã®ã¡ã³ããã³ã¹ãªãªã¼ã¹ï¼Python 3.7.1ï¼ã®ãªãªã¼ã¹ï¼https://www.python.org/dev/peps/pep-0537/#maintenance-releasesï¼ãå¾ ã¤ãã¨ããå§ããã¾ãã [2018å¹´7æã«æ«å®çã«äºæ³ããã]ã
ããã¸ã§ã¯ãã3.7ã®ã¿ã«ããã¹ãã ã¨ä¸»å¼µããã®ã¯é£ããã§ãã Python 3.7ã®æ°æ©è½ã®å¤ãã¯ãPython 3.6ã¸ã®ããã¯ãã¼ãï¼ãã¼ã¿ã¯ã©ã¹ã + importlib.resources +ï¼ã¾ãã¯ä¾¿å©ãï¼èµ·åã¨ã¡ã½ããå¼ã³åºãã®é«éåããããã°ã®å®¹æåãããã³ `+ -X +`ãªãã·ã§ã³ï¼ã¨ãã¦å©ç¨ã§ãã¾ãã å¾è
ã§ã¯ãPython 3.6ï¼ã¾ãã¯ãã以ä¸ï¼ã¨ã®äºææ§ãä¿ã¡ãªãããPython 3.7ãèªåã§å®è¡ãããã¨ã§å©ç¨ã§ãã¾ãã
ã³ã¼ããPython 3.7ã«ããã¯ãã大ããªæ©è½ã¯ãlinkï¼ï¼customization-of-module-attributes [+ on modules]ãlinkï¼ï¼typing-enhancements [ã¿ã¤ããã³ãã®åæ¹åç
§]ãããã³linkï¼ï¼timing-precision [ããç§ã® `+ time +`颿°]ã ãããã®ãããããæ¬å½ã«å¿
è¦ãªå ´åã¯ãå
ã«é²ãã§è¦ä»¶ãæ¹åããå¿
è¦ãããã¾ãã ããã§ãªããã°ãããªãã®ããã¸ã§ã¯ãã¯ãPython 3.6ã§ãã°ããå®è¡ãããã¨ãã§ããã°ãããããä»ã®äººã«ã¨ã£ã¦ããå½¹ã«ç«ã¤ã§ãããã getattr ï¼ï¼+
ã¢ããã°ã¬ã¼ãæã«æ³¨æãã¹ã詳細ã«ã¤ãã¦ã¯ãhttps://docs.python.org/3.7/whatsnew/3.7.html#porting-to-python-37 [Porting to Python 3.7 guide]ãåç §ãã¦ãã ããã |
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile( ATTRIBUTE => VALUE [, ...] );
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
MakeMaker is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
See ExtUtils::MakeMaker::Tutorial.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL PREFIX=/tmp/myperl5
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory and if it exists it execute the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
INSTALLDIRS set to perl site vendor PERLPREFIX SITEPREFIX VENDORPREFIXINST_ARCHLIB INSTALLARCHLIB INSTALLSITEARCH INSTALLVENDORARCHINST_LIB INSTALLPRIVLIB INSTALLSITELIB INSTALLVENDORLIBINST_BIN INSTALLBIN INSTALLSITEBIN INSTALLVENDORBININST_SCRIPT INSTALLSCRIPT INSTALLSCRIPT INSTALLSCRIPTINST_MAN1DIR INSTALLMAN1DIR INSTALLSITEMAN1DIR INSTALLVENDORMAN1DIRINST_MAN3DIR INSTALLMAN3DIR INSTALLSITEMAN3DIR INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
PREFIX and LIB can be used to set several INSTALL* attributes in one go. The quickest way to install a module in a non-standard place might be
perl Makefile.PL PREFIX=~
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib).
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile, on UNIX, this is called Makefile.aperl (may be system dependent). If you want to force the creation of a new perl, it is recommended, that you delete this Makefile.aperl, so the directories are searched-through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config , otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth to mention, that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
String containing name (and email address) of package author(s). Is used in PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=/tmp/ and installation would go into /tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. "/tmp/" not "/tmp".
Ref to array of subdirectories containing Makefile.PLs e.g. [ 'sdbm' ] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME above but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory, where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perllibrary that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
Boolean which tells MakeMaker, that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
If it is intended, that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
If the extension links to a library that it builds set this to the name of the library (see SDBM_File)
Perl module name for this extension (DBD::Oracle). This will default to the directory name but should be explicitly defined in the Makefile.PL.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands its running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml module meta-data file during 'make distdir'.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o"
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Negligence to set this flag in any one of loaded extension nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp} falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for read/writable files. Defaults to 644. See also "perm_rw" in MM_Unix.
Desired permission for executable files. Defaults to 755. See also "perm_rwx" in MM_Unix.
Ref to hash of files to be processed as perl programs. MakeMaker will default to any found *.PL file (except Makefile.PL) being keys and the basename of the file being the value. E.g.
{'foobar.PL' => 'foobar'}
The *.PL files are expected to produce output to the target files themselves. If multiple files can be generated from the same *.PL file then the value in the hash can be a reference to an array of target file names. E.g.
{'foobar.PL' => ['foobar1','foobar2']}
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIBDIR)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done.
Great care is necessary when defining the command if quoting needs to be done. For instance, you would need to say:
{'PM_FILTER' => 'grep -v \\"^\\#\\"'}
to remove all the leading coments on the fly during the build. The extra \\ are necessary, unfortunately, because this variable is interpolated within the context of a Perl program built on the command line, and double quotes are what is used with the -e switch to build that command line. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = grep -v \"^\#\"
Without the \\ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be effected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die with the proper message.
Note: see Test::Harness for a shortcut for stopping tests early if you are missing dependencies.
Do not use this parameter for simple requirements, which could be resolved at a later time, e.g. after an unsuccessful make test of your module.
It is extremely rare to have to use PREREQ_FATAL at all!
Hashref: Names of modules that need to be available to run this extension (e.g. Fcntl for SDBM_File) are the keys of the hash and the desired version is the value. If the required version number is 0, we only check if any version is installed already.
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = { 'A::B' => Vers1, 'C::D' => Vers2, ... };
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains the regular expression
/([\$*])(([\w\:\']*)\bVERSION)\b.*\=/
will be evaluated with eval() and the value of the named variable after the eval() will be assigned to the VERSION attribute of the MakeMaker object. The following lines will be parsed o.k.:
$VERSION = '1.00';
*VERSION = \'1.01';
$VERSION = sprintf "%d.%03d", q$Revision: 1.133 $ =~ /(\d+)/g;
$FOO::VERSION = '1.10';
*FOO::VERSION = \'1.11';
our $VERSION = 1.2.3; # new for perl5.6.0
but these will fail:
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to an empty string, which is identical to -prototypes, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDECY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
{TESTS => 't/*.t'}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
<MAN3PODS = ' '>>
This is commonly used to supress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is <MAN3PODS = { }>>.
MakeMaker.pm uses the architecture specific information from Config.pm. In addition it evaluates architecture specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with an .pl file name extension (eg. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules which hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create a META.yml module meta-data file and add this to your MANFIEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TOUNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile( 'dist' => { COMPRESS=>"bzip2", SUFFIX=>".bz2" })
Long plaguing users of MakeMaker based modules has been the problem of getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. To this end a simple module meta-data file has been introduced, META.yml.
META.yml is a YAML document (see http://www.yaml.org) containing basic information about the module (name, version, prerequisites...) in an easy to read format. The format is developed and defined by the Module::Build developers (see http://module-build.sourceforge.net/META-spec.html)
MakeMaker will automatically generate a META.yml file for you and add it to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
ExtUtils::MakeMaker::WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split on whitespace, and the result is processed before any actual command line arguments are processed.
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
ExtUtils::MM_Unix, ExtUtils::Manifest ExtUtils::Install, ExtUtils::Embed
Andy Dougherty <doughera@lafayette.edu>, Andreas König <andreas.koenig@mind.de>, Tim Bunce <timb@cpan.org>. VMS support by Charles Bailey <bailey@newman.upenn.edu>. OS/2 support by Ilya Zakharevich <ilya@math.ohio-state.edu>.
Currently maintained by Michael G Schwern <schwern@pobox.com>
Send patches and ideas to <makemaker@perl.org>.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see http://www.makemaker.org.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html |
タイトルを表示する
前回、matplotlibを使って、グラフを表示する方法を解説しました。
グラフは表示できたのですが、何とも味気ないグラフになっていましたよね。
やはりグラフを表示したら、タイトルやX軸、Y軸の名称、凡例、それぞれの文字のサイズの調整、補助線の表示なんかしたくなりませんか?
今回はグラフをデコレーションする方法です。
まずは前回のおさらいから。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.show()
実行結果
ここにまずはタイトルを追加します。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.title("Test Graph")
plt.show()
実行結果
plt.title(“タイトル名”)でタイトルが追加できました。
ただ残念ながら、日本語は使えないようです。
日本語を使うと、エラーは出ませんが、日本語入力したところが□□□となってしまいます。
フォントサイズが小さいようなので、大きくしてみます。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.title("Test Graph", {"fontsize": 20})
plt.show()
実行結果
plt.title(“タイトル名”, {“fontsize”: X})のXにフォントサイズを入力することで、タイトルの文字の大きさを調整できます。
これでタイトルが、好みのサイズで表示できるようになりました。
X軸名、Y軸名を表示する
次にX軸名、Y軸名を表示してみましょう。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers")
plt.ylabel("Value")
plt.show()
実行結果
plt.xlabel(“X軸名”)でX軸の名称を表示、またplt.ylabel(“Y軸名”)でY軸の名称を表示できます。
こちらもフォントサイズが小さいので、大きくしてみます。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.show()
実行結果
これでX軸名、Y軸名の表示ができました。
ですが、それぞれの軸の数値が小さいので、こちらも大きくしてみましょう。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
plt.plot(x_value, y_value)
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.show()
実行結果
plt.tick_params(labelsize=X)により、軸の数値のサイズを調整することができます。
ですが、グラフが見切れてしまいました。
グラフのサイズの調整をしてみましょう。
グラフサイズの調整をする
グラフサイズを変更するには、最初にサイズを指定します。
%matplotlib notebook
from matplotlib import pyplot as plt
y_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x_value = range(1, len(y_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x_value, y_value)
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.show()
実行結果
fig=plt.figure(figsize=(X,Y))とすることで、グラフの全体のサイズを変更することができます。
これで軸の名称が見切れることが無くなりました。
凡例を表示する
今回はグラフは1つですが、2つ以上のグラフを表示する時、どれがどのデータのグラフなのか区別をつけるため、判例が欲しくなってきます。
せっかくなのでここでデータを変更し、データを二つに増やしてみます。
先ほどのx_value、y_valueをそれぞれx1_value、y1_valueに変えて、y2_valueに100刻みの数値を10個もつリストを作成しました。
x2_valueはx1_valueと同様、1からy2_valueの要素数+1としました。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value)
plt.plot(x2_value, y2_value)
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.show()
実行結果
これに凡例を表示します。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend()
plt.show()
実行結果
凡例を表示するためには、それぞれのグラフの名称を決めてあげなければいけません。
グラフを作成するplt.plot(X, Y)にlabel=”グラフ名”を追加することで、それぞれのグラフの名称を付けることができます。
そして、plt.legend()で凡例を表示することができます。
凡例のサイズを変えるには、plt.legend()の中にprop={“size”: X}を入れ、Xにフォントサイズを入力することで、凡例のサイズを変えることができます。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend(prop={"size": 20})
plt.show()
実行結果
また凡例の場所を変更することも可能です。
その場合には、plt.legend()に、loc=”場所”で指定します。
場所を指定する方法は、文字列で指定する方法と、数値で指定する方法があります。
場所 文字列 数値
自動 best 0
右上 upper right 1
左上 upper left 2
左下 lower left 3
右下 lower right 4
右 right 5
真ん中左 center left 6
真ん中右 center right 7
真ん中下 lower center 8
真ん中上 upper center 9
真ん中 center 10
先ほどの凡例は左上だったので、右下に変更してみましょう。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend(prop={"size": 20}, loc="lower right")
plt.show()
実行結果
これで凡例も自由自在ですね。
グリッド(補助線)を表示する
次にグリッド(補助線)を表示してみましょう。
グリッドを表示するには、plt.grid()を追加するだけです。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend(prop={"size": 20}, loc="lower right")
plt.grid()
plt.show()
実行結果
X軸だけ、Y軸だけ表示することもできます。
X軸だけ表示するときは、plt.grid(axis=”x”)、Y軸だけ表示するときは、plt.grid(axis=”y”)とします。
まずはX軸だけ補助線を表示してみます。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend(prop={"size": 20}, loc="lower right")
plt.grid(axis="x")
plt.show()
実行結果
次にY軸だけ補助線を表示してみます。
%matplotlib notebook
from matplotlib import pyplot as plt
y1_value = [1, 2, 4, 8, 16, 32, 64, 128, 256, 1028]
x1_value = range(1, len(y_value)+1)
y2_value = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
x2_value = range(1, len(y2_value)+1)
fig=plt.figure(figsize=(10,6))
plt.plot(x1_value, y1_value, label="test1")
plt.plot(x2_value, y2_value, label="test2")
plt.title("Test Graph", {"fontsize": 20})
plt.xlabel("Numbers", {"fontsize": 20})
plt.ylabel("Value", {"fontsize": 20})
plt.tick_params(labelsize=20)
plt.legend(prop={"size": 20}, loc="lower right")
plt.grid(axis="y")
plt.show()
実行結果
これで大体、全体の見栄えは整いました。
ここまでは2本の折れ線グラフなので、分かりやすいですが、データが増えてくると、何本も折れ線グラフが出てきて、分かりにくくなってきます。
そこで、次は線の変更方法を解説していきます。
ということで今回はこんな感じで。 |
ExtUtils::MakeMaker - Create a module Makefile
use ExtUtils::MakeMaker;
WriteMakefile( ATTRIBUTE => VALUE [, ...] );
This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by Andy Dougherty and the perl5-porters.
It splits the task of generating the Makefile into several subroutines that can be individually overridden. Each subroutine returns the text it wishes to have written to the Makefile.
MakeMaker is object oriented. Each directory below the current directory that contains a Makefile.PL is treated as a separate object. This makes it possible to write an unlimited number of Makefiles with a single invocation of WriteMakefile().
See ExtUtils::MakeMaker::Tutorial.
The long answer is the rest of the manpage :-)
The generated Makefile enables the user of the extension to invoke
perl Makefile.PL # optionally "perl Makefile.PL verbose"
make
make test # optionally set TEST_VERBOSE=1
make install # See below
The Makefile to be produced may be altered by adding arguments of the form KEY=VALUE. E.g.
perl Makefile.PL PREFIX=/tmp/myperl5
Other interesting targets in the generated Makefile are
make config # to check if the Makefile is up-to-date
make clean # delete local temp files (Makefile gets renamed)
make realclean # delete derived files (including ./blib)
make ci # check in all the files in the MANIFEST file
make dist # see below the Distribution Support section
MakeMaker checks for the existence of a file named test.pl in the current directory and if it exists it execute the script with the proper set of perl -I options.
MakeMaker also checks for any files matching glob("t/*.t"). It will execute all matching files in alphabetical order via the Test::Harness module with the -I switches set correctly.
If you'd like to see the raw output of your tests, set the TEST_VERBOSE variable to true.
make test TEST_VERBOSE=1
A useful variation of the above is the target testdb. It runs the test under the Perl debugger (see perldebug). If the file test.pl exists in the current directory, it is used for the test.
If you want to debug some other testfile, set the TEST_FILE variable thusly:
make testdb TEST_FILE=t/mytest.t
By default the debugger is called using -d option to perl. If you want to specify some other option, set the TESTDB_SW variable:
make testdb TESTDB_SW=-Dx
make alone puts all relevant files into directories that are named by the macros INST_LIB, INST_ARCHLIB, INST_SCRIPT, INST_MAN1DIR and INST_MAN3DIR. All these default to something below ./blib if you are not building below the perl source directory. If you are building below the perl source, INST_LIB and INST_ARCHLIB default to ../../lib, and INST_SCRIPT is not defined.
The install target of the generated Makefile copies the files found below each of the INST_* directories to their INSTALL* counterparts. Which counterparts are chosen depends on the setting of INSTALLDIRS according to the following table:
INSTALLDIRS set to perl site vendor PERLPREFIX SITEPREFIX VENDORPREFIXINST_ARCHLIB INSTALLARCHLIB INSTALLSITEARCH INSTALLVENDORARCHINST_LIB INSTALLPRIVLIB INSTALLSITELIB INSTALLVENDORLIBINST_BIN INSTALLBIN INSTALLSITEBIN INSTALLVENDORBININST_SCRIPT INSTALLSCRIPT INSTALLSCRIPT INSTALLSCRIPTINST_MAN1DIR INSTALLMAN1DIR INSTALLSITEMAN1DIR INSTALLVENDORMAN1DIRINST_MAN3DIR INSTALLMAN3DIR INSTALLSITEMAN3DIR INSTALLVENDORMAN3DIR
The INSTALL... macros in turn default to their %Config ($Config{installprivlib}, $Config{installarchlib}, etc.) counterparts.
You can check the values of these variables on your system with
perl '-V:install.*'
And to check the sequence in which the library directories are searched by perl, run
perl -le 'print join $/, @INC'
PREFIX and LIB can be used to set several INSTALL* attributes in one go. The quickest way to install a module in a non-standard place might be
perl Makefile.PL PREFIX=~
This will install all files in the module under your home directory, with man pages and libraries going into an appropriate place (usually ~/man and ~/lib).
Another way to specify many INSTALL directories with a single parameter is LIB.
perl Makefile.PL LIB=~/lib
This will install the module's architecture-independent files into ~/lib, the architecture-dependent files into ~/lib/$archname.
Note, that in both cases the tilde expansion is done by MakeMaker, not by perl by default, nor by make.
Conflicts between parameters LIB, PREFIX and the various INSTALL* arguments are resolved so that:
setting LIB overrides any setting of INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSITELIB, INSTALLSITEARCH (and they are not affected by PREFIX);
without LIB, setting PREFIX replaces the initial $Config{prefix} part of those INSTALL* arguments, even if the latter are explicitly set (but are set to still start with $Config{prefix}).
If the user has superuser privileges, and is not working on AFS or relatives, then the defaults for INSTALLPRIVLIB, INSTALLARCHLIB, INSTALLSCRIPT, etc. will be appropriate, and this incantation will be the best:
perl Makefile.PL;
make;
make test
make install
make install per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This feature can be bypassed by calling make pure_install.
will have to specify the installation directories as these most probably have changed since perl itself has been installed. They will have to do this by calling
perl Makefile.PL INSTALLSITELIB=/afs/here/today \
INSTALLSCRIPT=/afs/there/now INSTALLMAN3DIR=/afs/for/manpages
make
Be careful to repeat this procedure every time you recompile an extension, unless you are sure the AFS installation directories are still valid.
An extension that is built with the above steps is ready to use on systems supporting dynamic loading. On systems that do not support dynamic loading, any newly created extension has to be linked together with the available resources. MakeMaker supports the linking process by creating appropriate targets in the Makefile whenever an extension is built. You can invoke the corresponding section of the makefile with
make perl
That produces a new perl binary in the current directory with all extensions linked in that can be found in INST_ARCHLIB, SITELIBEXP, and PERL_ARCHLIB. To do that, MakeMaker writes a new Makefile, on UNIX, this is called Makefile.aperl (may be system dependent). If you want to force the creation of a new perl, it is recommended, that you delete this Makefile.aperl, so the directories are searched-through for linkable libraries again.
The binary can be installed into the directory where perl normally resides on your machine with
make inst_perl
To produce a perl binary with a different name than perl, either say
perl Makefile.PL MAP_TARGET=myperl
make myperl
make inst_perl
or say
perl Makefile.PL
make myperl MAP_TARGET=myperl
make inst_perl MAP_TARGET=myperl
In any case you will be prompted with the correct invocation of the inst_perl target that installs the new binary into INSTALLBIN.
make inst_perl per default writes some documentation of what has been done into the file $(INSTALLARCHLIB)/perllocal.pod. This can be bypassed by calling make pure_inst_perl.
Warning: the inst_perl: target will most probably overwrite your existing perl binary. Use with care!
Sometimes you might want to build a statically linked perl although your system supports dynamic loading. In this case you may explicitly set the linktype with the invocation of the Makefile.PL or make:
perl Makefile.PL LINKTYPE=static # recommended
or
make LINKTYPE=static # works on most systems
MakeMaker needs to know, or to guess, where certain things are located. Especially INST_LIB and INST_ARCHLIB (where to put the files during the make(1) run), PERL_LIB and PERL_ARCHLIB (where to read existing modules from), and PERL_INC (header files and libperl*.*).
Extensions may be built either using the contents of the perl source directory tree or from the installed perl library. The recommended way is to build extensions after you have run 'make install' on perl itself. You can do that in any directory on your hard disk that is not below the perl source tree. The support for extensions below the ext directory of the perl distribution is only good for the standard extensions that come with perl.
If an extension is being built below the ext/ directory of the perl source then MakeMaker will set PERL_SRC automatically (e.g., ../..). If PERL_SRC is defined and the extension is recognized as a standard extension, then other variables default to the following:
PERL_INC = PERL_SRC
PERL_LIB = PERL_SRC/lib
PERL_ARCHLIB = PERL_SRC/lib
INST_LIB = PERL_LIB
INST_ARCHLIB = PERL_ARCHLIB
If an extension is being built away from the perl source then MakeMaker will leave PERL_SRC undefined and default to using the installed copy of the perl library. The other variables default to the following:
PERL_INC = $archlibexp/CORE
PERL_LIB = $privlibexp
PERL_ARCHLIB = $archlibexp
INST_LIB = ./blib/lib
INST_ARCHLIB = ./blib/arch
If perl has not yet been installed then PERL_SRC can be defined on the command line as shown in the previous section.
If you don't want to keep the defaults for the INSTALL* macros, MakeMaker helps you to minimize the typing needed: the usual relationship between INSTALLPRIVLIB and INSTALLARCHLIB is determined by Configure at perl compilation time. MakeMaker supports the user who sets INSTALLPRIVLIB. If INSTALLPRIVLIB is set, but INSTALLARCHLIB not, then MakeMaker defaults the latter to be the same subdirectory of INSTALLPRIVLIB as Configure decided for the counterparts in %Config , otherwise it defaults to INSTALLPRIVLIB. The same relationship holds for INSTALLSITELIB and INSTALLSITEARCH.
MakeMaker gives you much more freedom than needed to configure internal variables and get different results. It is worth to mention, that make(1) also lets you configure most of the variables that are used in the Makefile. But in the majority of situations this will not be necessary, and should only be done if the author of a package recommends it (or you know what you're doing).
The following attributes may be specified as arguments to WriteMakefile() or as NAME=VALUE pairs on the command line.
One line description of the module. Will be included in PPD file.
Name of the file that contains the package description. MakeMaker looks for a line in the POD matching /^($package\s-\s)(.*)/. This is typically the first line in the "=head1 NAME" section. $2 becomes the abstract.
String containing name (and email address) of package author(s). Is used in PPD (Perl Package Description) files for PPM (Perl Package Manager).
Used when creating PPD files for binary packages. It can be set to a full or relative path or URL to the binary archive for a particular architecture. For example:
perl Makefile.PL BINARY_LOCATION=x86/Agent.tar.gz
builds a PPD package that references a binary of the Agent package, located in the x86 directory relative to the PPD itself.
Ref to array of *.c file names. Initialised from a directory scan and the values portion of the XS attribute hash. This is not currently used by MakeMaker but may be handy in Makefile.PLs.
String that will be included in the compiler call command line between the arguments INC and OPTIMIZE.
Arrayref. E.g. [qw(archname manext)] defines ARCHNAME & MANEXT from config.sh. MakeMaker will add to CONFIG the following values anyway: ar cc cccdlflags ccdlflags dlext dlsrc ld lddlflags ldflags libc lib_ext obj_ext ranlib sitelibexp sitearchexp so
CODE reference. The subroutine should return a hash reference. The hash may contain further attributes, e.g. {LIBS => ...}, that have to be determined by some evaluation method.
Something like "-DHAVE_UNISTD_H"
This is the root directory into which the code will be installed. It prepends itself to the normal prefix. For example, if your code would normally go into /usr/local/lib/perl you could set DESTDIR=/tmp/ and installation would go into /tmp/usr/local/lib/perl.
This is primarily of use for people who repackage Perl modules.
NOTE: Due to the nature of make, it is important that you put the trailing slash on your DESTDIR. "/tmp/" not "/tmp".
Ref to array of subdirectories containing Makefile.PLs e.g. [ 'sdbm' ] in ext/SDBM_File
A safe filename for the package.
Defaults to NAME above but with :: replaced with -.
For example, Foo::Bar becomes Foo-Bar.
Your name for distributing the package with the version number included. This is used by 'make dist' to name the resulting archive file.
Defaults to DISTNAME-VERSION.
For example, version 1.04 of Foo::Bar becomes Foo-Bar-1.04.
On some OS's where . has special meaning VERSION_SYM may be used in place of VERSION.
Hashref of symbol names for routines to be made available as universal symbols. Each key/value pair consists of the package name and an array of routine names in that package. Used only under AIX, OS/2, VMS and Win32 at present. The routine names supplied will be expanded in the same way as XSUB names are expanded by the XS() macro. Defaults to
{"$(NAME)" => ["boot_$(NAME)" ] }
e.g.
{"RPC" => [qw( boot_rpcb rpcb_gettime getnetconfigent )],
"NetconfigPtr" => [ 'DESTROY'] }
Please see the ExtUtils::Mksymlists documentation for more information about the DL_FUNCS, DL_VARS and FUNCLIST attributes.
Array of symbol names for variables to be made available as universal symbols. Used only under AIX, OS/2, VMS and Win32 at present. Defaults to []. (e.g. [ qw(Foo_version Foo_numstreams Foo_tree ) ])
Array of extension names to exclude when doing a static build. This is ignored if INCLUDE_EXT is present. Consult INCLUDE_EXT for more details. (e.g. [ qw( Socket POSIX ) ] )
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL EXCLUDE_EXT='Socket Safe'
Ref to array of executable files. The files will be copied to the INST_SCRIPT directory. Make realclean will delete them from there again.
If your executables start with something like #!perl or #!/usr/bin/perl MakeMaker will change this to the path of the perl 'Makefile.PL' was invoked with so the programs will be sure to run properly even if perl is not in /usr/bin/perl.
The name of the Makefile to be produced. This is used for the second Makefile that will be produced for the MAP_TARGET.
Defaults to 'Makefile' or 'Descrip.MMS' on VMS.
(Note: we couldn't use MAKEFILE because dmake uses this for something else).
Perl binary able to run this extension, load XS modules, etc...
Like PERLRUN, except it uses FULLPERL.
Like PERLRUNINST, except it uses FULLPERL.
This provides an alternate means to specify function names to be exported from the extension. Its value is a reference to an array of function names to be exported by the extension. These names are passed through unaltered to the linker options file.
Ref to array of *.h file names. Similar to C.
This attribute is used to specify names to be imported into the extension. Takes a hash ref.
It is only used on OS/2 and Win32.
Include file dirs eg: "-I/usr/5include -I/path/to/inc"
Array of extension names to be included when doing a static build. MakeMaker will normally build with all of the installed extensions when doing a static build, and that is usually the desired behavior. If INCLUDE_EXT is present then MakeMaker will build only with those extensions which are explicitly mentioned. (e.g. [ qw( Socket POSIX ) ])
It is not necessary to mention DynaLoader or the current extension when filling in INCLUDE_EXT. If the INCLUDE_EXT is mentioned but is empty then only DynaLoader and the current extension will be included in the build.
This attribute may be most useful when specified as a string on the command line: perl Makefile.PL INCLUDE_EXT='POSIX Socket Devel::Peek'
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to perl.
Directory to install binary files (e.g. tkperl) into if INSTALLDIRS=perl.
Determines which of the sets of installation directories to choose: perl, site or vendor. Defaults to site.
These directories get the man pages at 'make install' time if INSTALLDIRS=perl. Defaults to $Config{installman*dir}.
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to perl.
Defaults to $Config{installprivlib}.
Used by 'make install' which copies files from INST_SCRIPT to this directory.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to site (default).
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to site (default).
These directories get the man pages at 'make install' time if INSTALLDIRS=site (default). Defaults to $(SITEPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Used by 'make install', which copies files from INST_ARCHLIB to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_BIN to this directory if INSTALLDIRS is set to vendor.
Used by 'make install', which copies files from INST_LIB to this directory if INSTALLDIRS is set to vendor.
These directories get the man pages at 'make install' time if INSTALLDIRS=vendor. Defaults to $(VENDORPREFIX)/man/man$(MAN*EXT).
If set to 'none', no man pages will be installed.
Same as INST_LIB for architecture dependent files.
Directory to put real binary files during 'make'. These will be copied to INSTALLBIN during 'make install'
Directory where we put library files of this extension while building it.
Directory to hold the man pages at 'make' time
Directory to hold the man pages at 'make' time
Directory, where executable files should be installed during 'make'. Defaults to "./blib/script", just to have a dummy location during testing. make install will copy the files in INST_SCRIPT to INSTALLSCRIPT.
Program to be used to link libraries for dynamic loading.
Defaults to $Config{ld}.
Any special flags that might need to be passed to ld to create a shared library suitable for dynamic loading. It is up to the makefile to use it. (See "lddlflags" in Config)
Defaults to $Config{lddlflags}.
Defaults to "$(OBJECT)" and is used in the ld command to specify what files to link/load from (also see dynamic_lib below for how to specify ld flags)
LIB should only be set at perl Makefile.PL time but is allowed as a MakeMaker argument. It has the effect of setting both INSTALLPRIVLIB and INSTALLSITELIB to that value regardless any explicit setting of those arguments (or of PREFIX). INSTALLARCHLIB and INSTALLSITEARCH are set to the corresponding architecture subdirectory.
The filename of the perllibrary that will be used together with this extension. Defaults to libperl.a.
An anonymous array of alternative library specifications to be searched for (in order) until at least one library is found. E.g.
'LIBS' => ["-lgdbm", "-ldbm -lfoo", "-L/path -ldbm.nfs"]
Mind, that any element of the array contains a complete set of arguments for the ld command. So do not specify
'LIBS' => ["-ltcl", "-ltk", "-lX11"]
See ODBM_File/Makefile.PL for an example, where an array is needed. If you specify a scalar as in
'LIBS' => "-ltcl -ltk -lX11"
MakeMaker will turn it into an array with one element.
'static' or 'dynamic' (default unless usedl=undef in config.sh). Should only be used to force static linking (also see linkext below).
Boolean which tells MakeMaker, that it should include the rules to make a perl. This is handled automatically as a switch by MakeMaker. The user normally does not need it.
When 'make clean' or similar is run, the $(FIRST_MAKEFILE) will be backed up at this location.
Defaults to $(FIRST_MAKEFILE).old or $(FIRST_MAKEFILE)_old on VMS.
Hashref of pod-containing files. MakeMaker will default this to all EXE_FILES files that include POD directives. The files listed here will be converted to man pages and installed as was requested at Configure time.
Hashref that assigns to *.pm and *.pod files the files into which the manpages are to be written. MakeMaker parses all *.pod and *.pm files for POD directives. Files that contain POD will be the default keys of the MAN3PODS hashref. These will then be converted to man pages during make and will be installed during make install.
If it is intended, that a new perl binary be produced, this variable may hold a name for that binary. Defaults to perl
If the extension links to a library that it builds set this to the name of the library (see SDBM_File)
Perl module name for this extension (DBD::Oracle). This will default to the directory name but should be explicitly defined in the Makefile.PL.
MakeMaker will figure out if an extension contains linkable code anywhere down the directory tree, and will set this variable accordingly, but you can speed it up a very little bit if you define this boolean variable yourself.
Command so make does not print the literal commands its running.
By setting it to an empty string you can generate a Makefile that prints all commands. Mainly used in debugging MakeMaker itself.
Defaults to @.
Boolean. Attribute to inhibit descending into subdirectories.
When true, suppresses the generation and addition to the MANIFEST of the META.yml module meta-data file during 'make distdir'.
Defaults to false.
In general, any generated Makefile checks for the current version of MakeMaker and the version the Makefile was built under. If NO_VC is set, the version check is neglected. Do not write this into your Makefile.PL, use it interactively instead.
List of object files, defaults to '$(BASEEXT)$(OBJ_EXT)', but can be a long string containing all object files, e.g. "tkpBind.o tkpButton.o tkpCanvas.o"
(Where BASEEXT is the last component of NAME, and OBJ_EXT is $Config{obj_ext}.)
Defaults to -O. Set it to -g to turn debugging on. The flag is passed to subdirectory makes.
Perl binary for tasks that can be done by miniperl
Set only when MakeMaker is building the extensions of the Perl core distribution.
The call to the program that is able to compile perlmain.c. Defaults to $(CC).
Same as for PERL_LIB, but for architecture dependent files.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_ARCHLIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
Directory containing the Perl library to use.
Used only when MakeMaker is building the extensions of the Perl core distribution (because normally $(PERL_LIB) is automatically in @INC, and adding it would get in the way of PERL5LIB).
defaults to 0. Should be set to TRUE if the extension can work with the memory allocation routines substituted by the Perl malloc() subsystem. This should be applicable to most extensions with exceptions of those
with bugs in memory allocations which are caught by Perl's malloc();
which interact with the memory allocator in other ways than via malloc(), realloc(), free(), calloc(), sbrk() and brk();
which rely on special alignment which is not provided by Perl's malloc().
NOTE. Negligence to set this flag in any one of loaded extension nullifies many advantages of Perl's malloc(), such as better usage of system resources, error detection, memory usage reporting, catchable failure of memory allocations, etc.
Directory under which core modules are to be installed.
Defaults to $Config{installprefixexp} falling back to $Config{installprefix}, $Config{prefixexp} or $Config{prefix} should $Config{installprefixexp} not exist.
Overridden by PREFIX.
Use this instead of $(PERL) when you wish to run perl. It will set up extra necessary flags for you.
Use this instead of $(PERL) when you wish to run perl to work with modules. It will add things like -I$(INST_ARCH) and other necessary flags so perl can see the modules you're about to install.
Directory containing the Perl source code (use of this should be avoided, it may be undefined)
Desired permission for read/writable files. Defaults to 644. See also "perm_rw" in MM_Unix.
Desired permission for executable files. Defaults to 755. See also "perm_rwx" in MM_Unix.
Ref to hash of files to be processed as perl programs. MakeMaker will default to any found *.PL file (except Makefile.PL) being keys and the basename of the file being the value. E.g.
{'foobar.PL' => 'foobar'}
The *.PL files are expected to produce output to the target files themselves. If multiple files can be generated from the same *.PL file then the value in the hash can be a reference to an array of target file names. E.g.
{'foobar.PL' => ['foobar1','foobar2']}
Hashref of .pm files and *.pl files to be installed. e.g.
{'name_of_file.pm' => '$(INST_LIBDIR)/install_as.pm'}
By default this will include *.pm and *.pl and the files found in the PMLIBDIRS directories. Defining PM in the Makefile.PL will override PMLIBDIRS.
Ref to array of subdirectories containing library files. Defaults to [ 'lib', $(BASEEXT) ]. The directories will be scanned and any files they contain will be installed in the corresponding location in the library. A libscan() method can be used to alter the behaviour. Defining PM in the Makefile.PL will override PMLIBDIRS.
(Where BASEEXT is the last component of NAME.)
A filter program, in the traditional Unix sense (input from stdin, output to stdout) that is passed on each .pm file during the build (in the pm_to_blib() phase). It is empty by default, meaning no filtering is done.
Great care is necessary when defining the command if quoting needs to be done. For instance, you would need to say:
{'PM_FILTER' => 'grep -v \\"^\\#\\"'}
to remove all the leading coments on the fly during the build. The extra \\ are necessary, unfortunately, because this variable is interpolated within the context of a Perl program built on the command line, and double quotes are what is used with the -e switch to build that command line. The # is escaped for the Makefile, since what is going to be generated will then be:
PM_FILTER = grep -v \"^\#\"
Without the \\ before the #, we'd have the start of a Makefile comment, and the macro would be incorrectly defined.
Release 5.005 grandfathered old global symbol names by providing preprocessor macros for extension source compatibility. As of release 5.6, these preprocessor definitions are not available by default. The POLLUTE flag specifies that the old names should still be defined:
perl Makefile.PL POLLUTE=1
Please inform the module author if this is necessary to successfully install a module under 5.6 or later.
Name of the executable used to run PPM_INSTALL_SCRIPT below. (e.g. perl)
Name of the script that gets executed by the Perl Package Manager after the installation of a package.
This overrides all the default install locations. Man pages, libraries, scripts, etc... MakeMaker will try to make an educated guess about where to place things under the new PREFIX based on your Config defaults. Failing that, it will fall back to a structure which should be sensible for your platform.
If you specify LIB or any INSTALL* variables they will not be effected by the PREFIX.
Bool. If this parameter is true, failing to have the required modules (or the right versions thereof) will be fatal. perl Makefile.PL will die with the proper message.
Note: see Test::Harness for a shortcut for stopping tests early if you are missing dependencies.
Do not use this parameter for simple requirements, which could be resolved at a later time, e.g. after an unsuccessful make test of your module.
It is extremely rare to have to use PREREQ_FATAL at all!
Hashref: Names of modules that need to be available to run this extension (e.g. Fcntl for SDBM_File) are the keys of the hash and the desired version is the value. If the required version number is 0, we only check if any version is installed already.
Bool. If this parameter is true, the prerequisites will be printed to stdout and MakeMaker will exit. The output format is an evalable hash ref.
$PREREQ_PM = { 'A::B' => Vers1, 'C::D' => Vers2, ... };
RedHatism for PREREQ_PRINT. The output format is different, though:
perl(A::B)>=Vers1 perl(C::D)>=Vers2 ...
Like PERLPREFIX, but only for the site install locations.
Defaults to $Config{siteprefixexp}. Perls prior to 5.6.0 didn't have an explicit siteprefix in the Config. In those cases $Config{installprefix} will be used.
Overridable by PREFIX
Arrayref. E.g. [qw(name1 name2)] skip (do not write) sections of the Makefile. Caution! Do not use the SKIP attribute for the negligible speedup. It may seriously damage the resulting Makefile. Only use it if you really need it.
Ref to array of typemap file names. Use this when the typemaps are in some directory other than the current directory or when they are not named typemap. The last typemap in the list takes precedence. A typemap in the current directory has highest precedence, even if it isn't listed in TYPEMAPS. The default system typemap has lowest precedence.
Like PERLPREFIX, but only for the vendor install locations.
Defaults to $Config{vendorprefixexp}.
Overridable by PREFIX
If true, make install will be verbose
Your version number for distributing the package. This defaults to 0.1.
Instead of specifying the VERSION in the Makefile.PL you can let MakeMaker parse a file to determine the version number. The parsing routine requires that the file named by VERSION_FROM contains one single line to compute the version number. The first line in the file that contains the regular expression
/([\$*])(([\w\:\']*)\bVERSION)\b.*\=/
will be evaluated with eval() and the value of the named variable after the eval() will be assigned to the VERSION attribute of the MakeMaker object. The following lines will be parsed o.k.:
$VERSION = '1.00';
*VERSION = \'1.01';
$VERSION = sprintf "%d.%03d", q$Revision: 1.133 $ =~ /(\d+)/g;
$FOO::VERSION = '1.10';
*FOO::VERSION = \'1.11';
our $VERSION = 1.2.3; # new for perl5.6.0
but these will fail:
my $VERSION = '1.01';
local $VERSION = '1.02';
local $FOO::VERSION = '1.30';
(Putting my or local on the preceding line will work o.k.)
The file named in VERSION_FROM is not added as a dependency to Makefile. This is not really correct, but it would be a major pain during development to have to rewrite the Makefile for any smallish change in that file. If you want to make sure that the Makefile contains the correct VERSION macro after any change of the file, you would have to do something like
depend => { Makefile => '$(VERSION_FROM)' }
See attribute depend below.
A sanitized VERSION with . replaced by _. For places where . has special meaning (some filesystems, RCS labels, etc...)
Hashref of .xs files. MakeMaker will default this. e.g.
{'name_of_file.xs' => 'name_of_file.c'}
The .c files will automatically be included in the list of files deleted by a make clean.
String of options to pass to xsubpp. This might include -C++ or -extern. Do not include typemaps here; the TYPEMAP parameter exists for that purpose.
May be set to an empty string, which is identical to -prototypes, or -noprototypes. See the xsubpp documentation for details. MakeMaker defaults to the empty string.
Your version number for the .xs file of this package. This defaults to the value of the VERSION attribute.
can be used to pass parameters to the methods which implement that part of the Makefile. Parameters are specified as a hash ref but are passed to the method as a hash.
{FILES => "*.xyz foo"}
{ANY_TARGET => ANY_DEPENDECY, ...}
(ANY_TARGET must not be given a double-colon rule by MakeMaker.)
{TARFLAGS => 'cvfF', COMPRESS => 'gzip', SUFFIX => '.gz',
SHAR => 'shar -m', DIST_CP => 'ln', ZIP => '/bin/zip',
ZIPFLAGS => '-rl', DIST_DEFAULT => 'private tardist' }
If you specify COMPRESS, then SUFFIX should also be altered, as it is needed to tell make the target file of the compression. Setting DIST_CP to ln can be useful, if you need to preserve the timestamps on your files. DIST_CP can take the values 'cp', which copies the file, 'ln', which links the file, and 'best' which copies symbolic links and links the rest. Default is 'best'.
{ARMAYBE => 'ar', OTHERLDFLAGS => '...', INST_DYNAMIC_DEP => '...'}
{LINKTYPE => 'static', 'dynamic' or ''}
NB: Extensions that have nothing but *.pm files had to say
{LINKTYPE => ''}
with Pre-5.0 MakeMakers. Since version 5.00 of MakeMaker such a line can be deleted safely. MakeMaker recognizes when there's nothing to be linked.
{ANY_MACRO => ANY_VALUE, ...}
Anything put here will be passed to MY::postamble() if you have one.
{FILES => '$(INST_ARCHAUTODIR)/*.xyz'}
{TESTS => 't/*.t'}
{MAXLEN => 8}
If you cannot achieve the desired Makefile behaviour by specifying attributes you may define private subroutines in the Makefile.PL. Each subroutine returns the text it wishes to have written to the Makefile. To override a section of the Makefile you can either say:
sub MY::c_o { "new literal text" }
or you can edit the default by saying something like:
package MY; # so that "SUPER" works right
sub c_o {
my $inherited = shift->SUPER::c_o(@_);
$inherited =~ s/old text/new text/;
$inherited;
}
If you are running experiments with embedding perl as a library into other applications, you might find MakeMaker is not sufficient. You'd better have a look at ExtUtils::Embed which is a collection of utilities for embedding.
If you still need a different solution, try to develop another subroutine that fits your needs and submit the diffs to makemaker@perl.org
For a complete description of all MakeMaker methods see ExtUtils::MM_Unix.
Here is a simple example of how to add a new target to the generated Makefile:
sub MY::postamble {
return <<'MAKE_FRAG';
$(MYEXTLIB): sdbm/Makefile
cd sdbm && $(MAKE) all
MAKE_FRAG
}
WriteMakefile() now does some basic sanity checks on its parameters to protect against typos and malformatted values. This means some things which happened to work in the past will now throw warnings and possibly produce internal errors.
Some of the most common mistakes:
<MAN3PODS = ' '>>
This is commonly used to supress the creation of man pages. MAN3PODS takes a hash ref not a string, but the above worked by accident in old versions of MakeMaker.
The correct code is <MAN3PODS = { }>>.
MakeMaker.pm uses the architecture specific information from Config.pm. In addition it evaluates architecture specific hints files in a hints/ directory. The hints files are expected to be named like their counterparts in PERL_SRC/hints, but with an .pl file name extension (eg. next_3_2.pl). They are simply evaled by MakeMaker within the WriteMakefile() subroutine, and can be used to execute commands as well as to include special variables. The rules which hintsfile is chosen are the same as in Configure.
The hintsfile is eval()ed immediately after the arguments given to WriteMakefile are stuffed into a hash reference $self but before this reference becomes blessed. So if you want to do the equivalent to override or create an attribute you would say something like
$self->{LIBS} = ['-ldbm -lucb -lc'];
For authors of extensions MakeMaker provides several Makefile targets. Most of the support comes from the ExtUtils::Manifest module, where additional documentation can be found.
reports which files are below the build directory but not in the MANIFEST file and vice versa. (See ExtUtils::Manifest::fullcheck() for details)
reports which files are skipped due to the entries in the MANIFEST.SKIP file (See ExtUtils::Manifest::skipcheck() for details)
does a realclean first and then the distcheck. Note that this is not needed to build a new distribution as long as you are sure that the MANIFEST file is ok.
rewrites the MANIFEST file, adding all remaining files found (See ExtUtils::Manifest::mkmanifest() for details)
Copies all the files that are in the MANIFEST file to a newly created directory with the name $(DISTNAME)-$(VERSION). If that directory exists, it will be removed first.
Additionally, it will create a META.yml module meta-data file and add this to your MANFIEST. You can shut this behavior off with the NO_META flag.
Makes a distdir first, and runs a perl Makefile.PL, a make, and a make test in that directory.
First does a distdir. Then a command $(PREOP) which defaults to a null command, followed by $(TOUNIX), which defaults to a null command under UNIX, and will convert files in distribution directory to UNIX format otherwise. Next it runs tar on that directory into a tarfile and deletes the directory. Finishes with a command $(POSTOP) which defaults to a null command.
Defaults to $(DIST_DEFAULT) which in turn defaults to tardist.
Runs a tardist first and uuencodes the tarfile.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Next it runs shar on that directory into a sharfile and deletes the intermediate directory again. Finishes with a command $(POSTOP) which defaults to a null command. Note: For shdist to work properly a shar program that can handle directories is mandatory.
First does a distdir. Then a command $(PREOP) which defaults to a null command. Runs $(ZIP) $(ZIPFLAGS) on that directory into a zipfile. Then deletes that directory. Finishes with a command $(POSTOP) which defaults to a null command.
Does a $(CI) and a $(RCS_LABEL) on all files in the MANIFEST file.
Customization of the dist targets can be done by specifying a hash reference to the dist attribute of the WriteMakefile call. The following parameters are recognized:
CI ('ci -u')
COMPRESS ('gzip --best')
POSTOP ('@ :')
PREOP ('@ :')
TO_UNIX (depends on the system)
RCS_LABEL ('rcs -q -Nv$(VERSION_SYM):')
SHAR ('shar')
SUFFIX ('.gz')
TAR ('tar')
TARFLAGS ('cvf')
ZIP ('zip')
ZIPFLAGS ('-r')
An example:
WriteMakefile( 'dist' => { COMPRESS=>"bzip2", SUFFIX=>".bz2" })
Long plaguing users of MakeMaker based modules has been the problem of getting basic information about the module out of the sources without running the Makefile.PL and doing a bunch of messy heuristics on the resulting Makefile. To this end a simple module meta-data file has been introduced, META.yml.
META.yml is a YAML document (see http://www.yaml.org) containing basic information about the module (name, version, prerequisites...) in an easy to read format. The format is developed and defined by the Module::Build developers (see http://module-build.sourceforge.net/META-spec.html)
MakeMaker will automatically generate a META.yml file for you and add it to your MANIFEST as part of the 'distdir' target (and thus the 'dist' target). This is intended to seamlessly and rapidly populate CPAN with module meta-data. If you wish to shut this feature off, set the NO_META WriteMakefile() flag to true.
If some events detected in Makefile.PL imply that there is no way to create the Module, but this is a normal state of things, then you can create a Makefile which does nothing, but succeeds on all the "usual" build targets. To do so, use
ExtUtils::MakeMaker::WriteEmptyMakefile();
instead of WriteMakefile().
This may be useful if other modules expect this module to be built OK, as opposed to work OK (say, this system-dependent module builds in a subdirectory of some other distribution, or is listed as a dependency in a CPAN::Bundle, but the functionality is supported by different means on the current architecture).
my $value = prompt($message);
my $value = prompt($message, $default);
The prompt() function provides an easy way to request user input used to write a makefile. It displays the $message as a prompt for input. If a $default is provided it will be used as a default. The function returns the $value selected by the user.
If prompt() detects that it is not running interactively and there is nothing on STDIN or if the PERL_MM_USE_DEFAULT environment variable is set to true, the $default will be used without prompting. This prevents automated processes from blocking on user input.
If no $default is provided an empty string will be used instead.
Command line options used by MakeMaker->new(), and thus by WriteMakefile(). The string is split on whitespace, and the result is processed before any actual command line arguments are processed.
If set to a true value then MakeMaker's prompt function will always return the default without waiting for user input.
ExtUtils::MM_Unix, ExtUtils::Manifest ExtUtils::Install, ExtUtils::Embed
Andy Dougherty <doughera@lafayette.edu>, Andreas König <andreas.koenig@mind.de>, Tim Bunce <timb@cpan.org>. VMS support by Charles Bailey <bailey@newman.upenn.edu>. OS/2 support by Ilya Zakharevich <ilya@math.ohio-state.edu>.
Currently maintained by Michael G Schwern <schwern@pobox.com>
Send patches and ideas to <makemaker@perl.org>.
Send bug reports via http://rt.cpan.org/. Please send your generated Makefile along with your report.
For more up-to-date information, see http://www.makemaker.org.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html |
دریافت داده از سایت یا webscraping چیست ؟
نرم افزار هایی ( webscraping ) با ارسال درخواست های مختلف به تجزیه و تحلیل داده هایی که درون پاسخ وجود دارد اقدام می کند
کتابخانه Requests :
برای نصب کتابخونه Requests تنها کافی است دستور زیر را بزنید
pip3 install requests
درخواست GET بسیار ساده تعریف می شود نحوه ارسال درخواست به این صورت می باشد
>>> import requests
>>> response = requests.get('https://api.github.com')
خب از متغیر response برای دسترسی به پاسخ وبسایت استفاده می کنیم برای دریافت کد وضعیت درخواست از کد
>>> print(response.status_code)
استفاده می کنیم برای مشاهده پاسخ سایت سه روش وجود دارد روش اول دریافت به صورت Bytes می باشد که با دستور
>>> response.content
b'{"current_user_url":"https://api.github.com/user"}'
برگردانده می شود داده می شود روش دوم دریافت پاسخ به صورت String می باشد که با دستور
>> response.text
'{"current_user_url":"https://api.github.com/user"}'
مقدار به صورت String برگردانده می شود ، این کتابخونه می تواند نوع کدگذازی پاسخ را متوجه شده و کدگشایی کند اما درصورتی که نوع کدگذاری را به اشتباه پیدا کرده بود می توانید با دستور
>>> response.encoding = 'utf-8'
نوع کدگذاری را تعیین کنید سپس پاسخ را به صورت String دریافت کنید روش سوم دریافت پاسخ از سایت به صورت Json می باشد که با دستور
>>> response.json()
{"current_user_url":"https://api.github.com/user"}
پاسخ درصورتی که به صورت json باشد تبدیل به دیکشنری در پایتون می شود و برگردانده می شود. یکی دیگر از تابع های پرکاربرد این کتابخونه تابع headers است که اطلاعات مفیدی درباره Content-type ، مدت زمانی که پاسخ توسط سرور نگه داشته می شود چه قدر است و … با دستور
>>> response.headers
{'Server': 'GitHub.com', 'Date': 'Mon, 10 Dec 2018 17:49:54 GMT', .....}
این مقدار ها به شما بازگردانده می شود همچنین برای دریافت تنها یک بخش از Header می توانید از دستور زیر استفاده کنید
>>> response.headers['Server']
GitHub.com
در متود get می توانیم پارامتر ها را درون لینک قرار دهیم اما برای اصولی تر شدن کار از دیکشنری به نام params درون ارسال درخواست اسفتاده می کنیم
>>> response = requests.get('https://api.github.com/search/repositories', params={'q': 'requests+language:python'})
همچنین می توایند پارامتر ها را به صورت Bytes هم ارسال کنید
>>> response = requests.get('https://api.github.com/search/repositories', params=b'q = requests+language:python')
برای تنظیم Header دیکشنری همانند پارامتر می نویسید به نام headers
>>> response = requests.get('https://api.github.com/search/repositories', params={'q': 'requests+language:python'},headers={'Accept': 'application/vnd.github.v3.text-match+json'})
درضمن درون Header با استفاده از دستور زیر می توانیم User-agent تایین کنیم
>>> my_user_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36'
>>> response = requests.get('https://api.github.com/search/repositories', params={'q': 'requests+language:python'},headers={'Accept': 'application/vnd.github.v3.text-match+json','user-agent':my_user_agent})
تمام تابع های ذکر شده برای متود Post نیز قابل استفاده است برای ارسال Post باید دستور زیر را وارد کرد
>>> requests.post('https://httpbin.org/post', data={'key':'value'})
دیگر متود هایی که کتابخونه Requests پشتیبانی می کند
>>> requests.put('https://httpbin.org/put', data={'key':'value'})
>>> requests.delete('https://httpbin.org/delete')
>>> requests.head('https://httpbin.org/get')
>>> requests.patch('https://httpbin.org/patch', data={'key':'value'})
>>> requests.options('https://httpbin.org/get')
کتاب خانه beautifulsoup4 :
برای نصب bs4 کتابخونه از دستور زیر استفاده می کنیم
pip3 install beautifulsoup4
برای دریافت کد های Html اول باید یک درخواست Get توسط کتابخونه Request بفرستیم
>>> import requests
>>> response = requests.get("http://www.example.com")
خب سپس باید بازخوردی که دریافت کردیم ( صفحه Html ) قابل تجزیه کنیم با دستور زیر این عمل را انجام می دهیم
>>> from bs4 import BeautifulSoup as BS
>>> soup = BS(response.content,'html.parser')
خب حال می خواهیم نوع داده هایی که درون متغیر soup را ببینیم ، children. عنصر را به عنصر های کوچک تر تبدیل می کند
>>> [type(item) for item in list(soup.children)]
[bs4.element.Doctype, bs4.element.NavigableString, bs4.element.Tag]
خب اولین عنصر این لیست هاوی نوع فایل است ، عنصر دوم متنی است که به عنوان جدا کننده می شناسد که گاها n می باشد اما عنصر سوم عنصری است که ما می خواهیم با آن کار کنیم حال باید عنصر سوم که هاوی کد های html است را به عناصری کوچک تر تبدیل کنیم
>> html = list(soup.children)[2]
حالا که تگ ها html صفحه را در اختیار داریم می خواهیم روش های مختلف انتخاب تگ را بررسی می کنیم . خب در صورتی که بخواهیم تمام تگ های مد نظر را انتخاب کنیم از دستور زیر استفاده می کنیم برای مثال ما از درون صفحه تمام تگ های a را انتخاب می کنیم
html.find_all('a')
توجه داشته باشید به دلیل وجود چند تگ a درون صفحه نوع پاسخ این تابع به صورت List است حالا در صورتی که بخواهیم یک تگ را انتخاب کنیم با خصوصیات مشخص از تابع زیر استفاده می کنیم
>>> html.find("input",{"id":"input1"})
این تابع ورودی ( input ) که id آن برابر input1 است را انتخاب می کند حال می خواهیم مقدار این ورودی را دریافت کنیم تنها کافی است از این دستور استفاده کنیم
>>> html.find("input",{"id":"input1"})["value"]
در صورتی که جای value هر یک از ورودی های استفاده شده در تگ انتخاب شده را قرار دهید مقدار آن را بر میگرداند . ما نیز می توانیم همانند انتخاب تگ در css تگ مورد نظر را انتخاب کنیم
کتابخانه Selenium :
این کتاب خانه در بیشتر زبان های برنامه نویسی وجود دارد اگر بخواهیم به طور کاملا خلاصه این کتاب خانه را معرفی کنیم می توانیم بگوییم این کتابخانه عرضه شده جهت ارتباط زبان برنامه نویسی با مرورگر ( ربات نویسی ) .
خب برای اینکه بتوانیم با مرورگر ارتباط بگیریم نیاز به یک برنامه رابط داریم نام این رابط webdrive است.
خب webdrive ها برای هر مرورگر متفاوت است شما می توانید برای مرورگر Chorm, Edge, Firefox, Safari وب درایور مخصوصشان را دانلود کنید لینک صفحه وب درایو مخصوص هر مرورگر را اینجا قرار می دهیم
Chrome: https://sites.google.com/a/chromium.org/chromedriver/downloads
Edge: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
Firefox: https://github.com/mozilla/geckodriver/releases
Safari: https://webkit.org/blog/6900/webdriver-support-in-safari-10/
خب ما در این آموزش از geckodrive استفاده می کنیم این درایو مخصوص FireFox است . geckodrive هم برای ویندوز هم برای لینوکس عرضه شده است نصب در هر دو همانند یک دیگر است مراحل آماده سازی درایو به این صورت است
۱. دانلود فایل پرس شده مخصوص سیستم عامل
۲. استخراج فایل
۳. قرار دادن فایل geckodrive در پوشه ای که پروژه قرار دارد
حالا میخواهیم یک صفحه باز کنیم ( در لینوکس executable_path را به صورت ./geckodriver و در ویندوز geckodrive.exe برای ادرس دهی استفاده می کنیم توجه داشته باشید تعریف executable_path واجب نمی باشد در صورت کنار هم بودن درایو مورد نظر تنها صدا زدن نام درایو کافیست)
>> from selenium import webdriver
>> from selenium.webdriver.common.keys import Keys
>> from time import sleep
>> browser = webdriver.Firefox(executable_path='./geckodriver')
>> browser.set_window_size(900,900)
>> browser.get("https://pythons.ir/")
>> browser.find_element_by_name("s").send_keys("selenium")
>> sleep(5)
>> browser.close()
در مثال بالا در خط ۴ با استفاده از ()webdrive.Firefox گفتیم که webdrivex ما برابر با فایرفاکس و محل خود فایل در همانجا به نام geckodriver است همچنین می توانیم از نوشتن آن امتناع کنیم. خط بعدی امده ایم اندازه صفحه را بر حسب پیکس ابعاد داده ایم سپس در خط بعد با دستور get گفتیم یک درخواست به صفحه اصلی pythons.ir بفرستد سپس به دنبال تگی گشتیم که name آن برابر s است پس از پیدا کردن با دستور send_keys مقدار selenium را درون آن قرار می دهد و اجرا می کند سپس با استفاده از sleep گفته ایم ۵ ثانیه منتظر بمان و بعد مرورگر را ببند
نحوه انتخاب تگ ها :
انتخاب تگ با استفاده از id
my_tag = browser.find_element_by_id("id")
انتخاب تگ با استفاده از name
my_tag = browser.find_element_by_name("name")
انتخبا تگ با استفاده از xpath
xpath زبانی است که برای یافتن تگ ها در یک سند XML استفاده می شود. از آنجا که HTML به XML نزدیک است باشد ، در نتیجه ما می توانند از این زبان قدرتمند برای هدف قرار دادن تگ ها استفاده کنیم ، برای مثال می خواهیم یک تگ form را انتخاب کنیم که id آن برابر loginform باشه به مثال زیر دقت کنید.
my_tag = browser.find_element_by_xpath("//form[@id='loginForm']")
انتخاب توسط link_text
این تابع تمام تگ های لینک را انتخاب می کند و سپس می توانیم تایین کنیم تنها تگی که متن آن برابر متن مورد نظر ما است را برگرداند
my_tag = browser.find_element_link_text("hello")
انتخاب تگ توسط class
my_tag = browser.find_element_by_class_name("hello")
انتخاب تگ مانند انتخاب تگ در css
my_tag = driver.find_element_by_css_selector('div#p')
منتظر ماندن ، خارج شدن :
ما می توانیم درون webdrive تایین کنیم در صورتی که تگ مورد نظر ما در زمان معین شده پیدا نشد ارور داده و از حالت لودینگ خارج شود و دستور بعدی را اجزا کند به مثال زیر دقت کنید
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("https://pythons.ir/")
try:
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "myDynamicElement")))
element.click()
finally:
driver.quit()
خب در مثال بالا سه تابع جدید import کردیم اولین تابع By است که می توانیم تایین کنیم بر چه اساس می خواهیم تگ را انتخاب کنیم نحوه استفاده از این تابع با رنک قرمز در بالا نشان داده شده. و تابع بعدی که import شده است از support.ui تابع WebDriverWait است همانطور که از نام این تابع مشخص مربوط به دریافت زمان و مدت زمان منتظر ماندن است که دو ورودی میگیرید یکی driver و دیگری مدت زمان انتظار به ثانیه و سپس از تابع until استفاده کردیم که به معنی تا زمانی که را می دهد درون until از EC استفاده کردیم همانطور که در خط ۴ مشاهده می کنید از webdriver.support تابع expected_conditions به معنی شرط مورد انتظار است را به عنوان EC اضافه کردیم ، از EC تابع presence_of_element_located را صدا زدیم به این معنی که در صورتی که تگ myDynamicElement را پیدا کرد مقدار تگ را بر گرداند و بر روی آن کلیک کند در غیر این صورت اروری را پرتاب می کند. در مثالی که بالا نوشته ایم تگ پیدا نمیشود در نتیجه درایو بسته می شود.
توجه داشته باشید که تابع By تنها از طریق ID اقدام به پیدا کردن تگ نمی کند شما می توانید از طریق name, class_name و … تگ را پیدا کنید .انواع پیدا کردن تگ توسط By :
توسط id
By.ID
توسط name
By.NAME
توسط نام کلاس
By.CLASS_NAME
توسط Xpath
By.XPATH
شرط های مورد انتظار :
می توانیم به جای presence_of_element_located در مثال بالا از شرط های مورد انتظار دیگری نیز استفاده کنیم لیست شرط های قابل دسترس :
در صورت وجود title
title_is
در صورت وجود بدنه
title_contains
درصورت نمایان شدن تگ مورد نظر
visibility_of_element_located
در صورت نماین شدن
visibility_of
وجود تمام تگ های مشخص شده
presence_of_all_elements_located
در صورت وجود متن مشخص شده در تگ
text_to_be_present_in_element
در صورت وجود متن مورد نظر در مقدار یک تگ
text_to_be_present_in_element_value
در صورت وجود frame که بتوان بر روی آن عملی را انجام دهیم
frame_to_be_available_and_switch_to_it
در صورت نامرئی بودن تگ
invisibility_of_element_located
در صورتی که تگ مورد نظر قابلیت کلیک کردن داشته باشد
element_to_be_clickable
در صورتی که تگ انتخاب شده در حالت … باشد
element_selection_state_to_be
وضعیت تگ از بخش مد نظر در حالت … باشد
element_located_selection_state_to_be
در صورتی که popup جاوا اسکریپتی اجرا شود
alert_is_present
دسترسی های Selenium :
دسترسی به popup ها :
نکته قابل توجه ای که selenium دارد قابلیت دسترسی به popup های جاوااسکریپتی (alert, prompt) نیز می باشد با استفاده از شرط های مورد انتظار می توانیم به popup دسترسی پیدا کنیم به مثال زیر دقت کنید
from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("https://pythons.ir/")
try:
WebDriverWait(driver, 10).until(EC.alert_is_present())
prompt = driver.switch_to_alert()
print(prompt.text)
prompt.send_keys("ok")
user_ans = int(input("Enter 1 if you want to accept and if you want to dismiss enter 2 >>>"))
if(user_ans == 1):
prompt.accept()
else:
prompt.dismiss()
finally:
sleep(3)
driver.quit()
در مثال بالا به درایو گفته ایم ۱۰ ثانیه وایسا تا زمانی که popup نمایان شود اگر نمایان شد پس اروری رخ نمی دهد و شرط ادامه پیدا می کند درخط بعد گفته ایم که بر روی alert متمرکز شود (توجه داشته باشید که منظور از alert همان popup است) و آن را برابر متغییر prompt قرار بده و متن آن را پرینت کن و از کاربر بخواه که عدد ۱ را درصورتی که می خواهد prompt را تایید کند و ۲ را در صورتی که می خواهد prompt را کنسل کند و در آخر گفته ایم برای ۳ ثانیه منتظر بمان و سپس درایو را ببند
دسترسی به تاریخچه و موقعیت :
پیش تر به دستور driver.get اشاره ای داشتیم همانطور که می دانید هنگامی که شما صفحه ای را باز می کنید درون تاریخچه مرورگر ذخیره می شود در کتابخانه selenium می توانیم به این تاریخچه ها دسترسی داشت باشیم اما به صورت موقت. در نتیجه می توانید با دو دستور driver.forward و driver.back به موقعیت قبلی بروید
دسترسی به کوکی :
یکی از دسترسی های پر کاربرد selenium دسترسی به کوکی ها می باشد . نجوه تنظیم کردن کوکی در selenium به این صورت است که اول باید یک دیکشنری تعریف کنیم سپس با دستور add_cookie کوکی را اضافه می کنیم و برای دریافت کوکی ا ز دستور get_cookie استفاده می کنیم. توجه داشته باشید path هر کوکی بابر موقعیت فعلی درایو است
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
cookie = {‘name’ : ‘foo’, ‘value’ : ‘bar’}
driver.add_cookie(cookie)
driver.get_cookies()
sleep(3)
driver.close()
دسترسی به دیگر صفحات :
همانطور که پیشتر در مثال دسترسی به popup ها دیدید که ما بر روی alert متمرکز شدیم می توانیم همین عمل را برای دیگر صفحهات یا تگ های frame انجام دهیم یعنی بر روی دیگر صفحات سویچ کنیم
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
driver.switch_to_window("windowName")
sleep(3)
driver.close()
دسترسی به سورس صفحه :
با استفاده از تابع page_source می توانیم به کد html صفحه دسترسی پیدا کنیم و سپس
from selenium import webdriver
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
html = driver.page_source
f = open("index.html","w")
f.write(html)
f.close()
sleep(3)
driver.close()
تعامل با صفحه در کتابخانه selenium :
انتخاب مقداری از تگ Select :
پیش تر نحوه انتخاب تگ را یاد گرفتیم حال میخواهیم انتخاب یک Select و انتخاب value را آموزش دهیم برای کار با تگ Select تابعی به نام Select وحود دارد. به مثال زیر دقت کنید
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
select = Select(driver.find_element_by_name('name'))
print(select.options)
select.select_by_index(1)
select.select_by_visible_text("English")
select.select_by_value("en")sleep(3)
driver.close()
در مثال بالا ما تگی که نام ان برابر name است را با استفاده از متد های find انتخاب کردیم سپس درون تابع Select قرار دادیم این تابع انتخاب option ها را برای ما در دسترس تر خواهد کرد سپس با دستور select.options تمام مقادیر تگ Select را پرینت می کندما به سه روش می توانیم option مورد نیاز را انتخاب کنیم روش اول با دادن index ( شماره ترتیب قرار گرفتن option مورد نظر ) بر اسا نوشته option یا بر اسا مقداری که option همراه خود دارد.
ارسال اطلاعات فرم :
یکی از راه های ارسال اطلاعات فرم کلیک بر روی دکمه submit است برای این کار دکمه Submit را انتخاب کرده و سپس تابع click را صدا می کنیم این عمل همانند کلیلک کردن بر روی دکمه ارسال داده فرم می باشد مثال زیر دقت کنید
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
btn_submit = driver.find_element_by_id('submit')
btn_submit.click()
sleep(3)
driver.close()
اما در بعضی از فرم ها هیچ دکمه Submit وجود ندارد و تنها با فشردن کلید Enter عمل می کند مانند فرم های سرچ برای اینکار از common تابع keys را به ضفحه اضافه می کنیم سپس ورودی مورد نظر را انتخاب می کنیم و زدن Enter را شبیه سازی می کنیم به مثال زیر دقت کنید
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from time import sleep
driver = webdriver.Firefox()
driver.get("https://pythons.ir")
search_input = driver.find_element_by_id('s')
search_input.send_keys("hello")
search_input.send_keys(u'ue007') sleep(3) driver.close()
در مثال بالا ما ورودی جست و جو را پیدا کرده و درون متغیر search_input قرار دادیم سپس مقدار hello را درون ورودی نوشته و سپس با همان تابع send_keys و با استفاده از api key کد دکمه Enter را وارد کردیم در ادامه کد کلید های خاص را مشاهده می کنید
ALT = u'ue00a'
ARROW_DOWN = u'ue015'
ARROW_LEFT = u'ue012'
ARROW_RIGHT = u'ue014'
ARROW_UP = u'ue013'
BACKSPACE = u'ue003'
BACK_SPACE = u'ue003'
CANCEL = u'ue001'
CLEAR = u'ue005'
COMMAND = u'ue03d'
CONTROL = u'ue009'
DECIMAL = u'ue028'
DELETE = u'ue017'
DIVIDE = u'ue029'
DOWN = u'ue015'
END = u'ue010'
ENTER = u'ue007'
EQUALS = u'ue019'
ESCAPE = u'ue00c'
F1 = u'ue031'
F10 = u'ue03a'
F11 = u'ue03b'
F12 = u'ue03c'
F2 = u'ue032'
F3 = u'ue033'
F4 = u'ue034'
F5 = u'ue035'
F6 = u'ue036'
F7 = u'ue037'
F8 = u'ue038'
F9 = u'ue039'
HELP = u'ue002'
HOME = u'ue011'
INSERT = u'ue016'
LEFT = u'ue012'
LEFT_ALT = u'ue00a'
LEFT_CONTROL = u'ue009'
LEFT_SHIFT = u'ue008'
META = u'ue03d'
MULTIPLY = u'ue024'
NULL = u'ue000'
NUMPAD0 = u'ue01a'
NUMPAD1 = u'ue01b'
NUMPAD2 = u'ue01c'
NUMPAD3 = u'ue01d'
NUMPAD4 = u'ue01e'
NUMPAD5 = u'ue01f'
NUMPAD6 = u'ue020'
NUMPAD7 = u'ue021'
NUMPAD8 = u'ue022'
NUMPAD9 = u'ue023'
PAGE_DOWN = u'ue00f'
PAGE_UP = u'ue00e'
PAUSE = u'ue00b'
RETURN = u'ue006'
RIGHT = u'ue014'
SEMICOLON = u'ue018'
SEPARATOR = u'ue026'
SHIFT = u'ue008'
SPACE = u'ue00d'
SUBTRACT = u'ue027'
TAB = u'ue004'
UP = u'ue013' |
私はsmlが初めてです。特定の番号を持つ5つの位置の配列を受け取り、すべての番号を含む最小のサブ配列の長さを返す単純なコードを作成しようとしています。ただし、Googleで見つけることができない多くのエラーメッセージが表示されます。誰も私を助けることができますか?コードは次のとおりです
fun Min x y = if x>y then return y else return x
local
val a = Array.array (3,0)
val cordela = Array.array(5,0)
val k=0
val front=0
val tail=0
val min=5
update(cordela,0,1)
update(cordela,1,3)
update(cordela,2,3)
update(cordela,3,2)
update(cordela,4,1)
in
fun loop front =
case k>3 of
if sub(a,sub(cordela,front)-1) = 0 then k=k+1 else()
update(a,sub(cordela,front)-1),sub(a,sub(cordela,front)-1)+1)
front = front +1
|
min= Min (front-tail) min
if sub(a,sub(cordela,front)-1) = 0 then k=k-1 else()
update(a,sub(cordela,front)-1),sub(a,sub(cordela,front)-1)-1)
tail=tail+1
if 5>front then loop front+1 else min
end
表示されるエラーメッセージ:
pl2.sml:16.13-16.15 Error: syntax error: replacing OF with LBRACKET
pl2.sml:18.36 Error: syntax error: inserting LPAREN
pl2.sml:20.4 Error: syntax error: replacing BAR with EQUALOP
pl2.sml:22.5 Error: syntax error: inserting LPAREN
pl2.sml:26.4 Error: syntax error: inserting LPAREN
pl2.sml:27.2 Error: syntax error found at END
編集:このコードをsmlで記述しようとしています。 C ++で記述されています
while(front < N){
if( k < K ){
if ( e[cordela[front]-1] == 0 ) k += 1;
e[cordela[front]-1] +=1;
front++ ;
}
else{
min = MIN(front - tail ,min);
if ( e[cordela[tail]-1] ==1 ) k -= 1;
e[cordela[tail]-1] -= 1;
tail++;
}
}
解決した方法 # 1
John Colemanが言うように、SML/NJはあまり有用なエラーメッセージを出しません。より良いエラーメッセージが表示されるため、代わりにMoscow MLを試してインストールできます。残念ながら、構文レベルでのこのコードにはいくつかの問題があり、コンパイラーが意味のあるエラーを出すのを難しくしています。アルゴリズムの問題に集中できるように、構文を正しくするためのヒントを次に示します。
localを使用しないでください 、letを使用 。
各
(を一致させる と);)が多すぎます s。fun loop ... = ...を宣言する 内部letおよびin。
一度実行すると、問題を解決する関数のテンプレートは次のようになります。
fun smallest_subarray (needles : Array.array, haystack : Array.array) =
let
val ... = ...
fun loop ... = ...
in
if Array.length needles > Array.length haystack
then ...
else loop ...
end
問題の解決策がない場合、関数は何を返しますか?
~1?NONE
C ++プログラムをSMLに変換しようとしている場合、どの識別子が関数の引数であるかが明確になるように関数部分を含めて、論理的に名前を付けます。
cordelaが何なのかわからない 、eおよびkあり、またはNの場合 入力配列のサイズの関数、または定数です。
SMLの慣用的なソリューションでは、反復(
while)ではなく再帰(関数を呼び出す関数)が使用されるため )、あなたは非自明なアルゴリズムの問題と別のパラダイムの両方を扱っています。代わりに、アルゴリズムはより単純で、再帰パラダイムを適用する同様の、しかしより単純な問題を解決してみてください。
たとえば、バイナリ検索を使用して、ソートされた配列内の要素の位置を見つける関数を作成してみてください:
fun find x arr =
let
fun loop ... = ...
in
loop ...
end
loop関数は検索範囲を取ります(例:iおよびj)引数として、いずれかのSOME iを返すxの場合iの位置にある 、またはNONE。入力配列needlesかどうかを判別する関数を作成しようとすると、元の問題の方向にこの問題を拡張できます。 、別の入力配列haystackで発生 、needlesで指定された順序で 。最初にneedlesと仮定できます およびhaystackソートされた後、そうではないと想定します。 |
In a recent blog post I briefly discussed how to build, export and run a service packaged via a Habitat plan.
In this post we will take a look at running Redis and backing it up via Shield.
Running Redis
To play around with the starkandwayne/redis release you can bring it up in the habitat studio:
$ hab studio enter
[1][default:/src:0]# hab svc load starkandwayne/redis
(...)
[2][default:/src:127]# hab pkg binlink starkandwayne/redis [4/1836]
» Symlinking redis-check-rdb from starkandwayne/redis into /bin
★ Binary redis-check-rdb from starkandwayne/redis/3.2.8/20170522110804 symlinked to /bin/redis-check-rdb
» Symlinking redis-server from starkandwayne/redis into /bin
★ Binary redis-server from starkandwayne/redis/3.2.8/20170522110804 symlinked to /bin/redis-server
(...)
[2][default:/src:0]# /bin/redis-cli -a password SET hello world
OK
[3][default:/src:0]# /bin/redis-cli -a password GET hello
"world"
Typing sl will give you the log output of the background supervisor that got started when you entered the studio:
[4][default:/src:0]# sl
--> Tailing the Habitat Supervisor's output (use 'Ctrl+c' to stop)
redis.default(O): | `-._`-._ _.-'_.-' |
redis.default(O): `-._ `-._`-.__.-'_.-' _.-'
redis.default(O): `-._ `-.__.-' _.-'
redis.default(O): `-._ _.-'
redis.default(O): `-.__.-'
redis.default(O):
redis.default(O): 168:M 22 May 13:11:55.082 # Server started, Redis version 3.2.8
redis.default(O): 168:M 22 May 13:11:55.082 * The server is now ready to accept connections on port 6379
(...)
Running Shield daemon
Since Shield is a bit more complex system with a few moving parts I will run it via the pre-exported docker images in docker-compose.
First lets bring up the shield-daemon connected to a database. The daemon is the main coordinator of shield. It triggers backups as needed and persists the state of all created archives and backup jobs.
$ mkdir redis-hab-demo && cd redis-hab-demo $ cat <<EOF > docker-compose.yml
version: '3'
services:
shield:
ports:
- 443:443
image: starkandwayne/shield
command: "start starkandwayne/shield --peer database --bind database:postgresql.shield"
links:
- database
database:
image: starkandwayne/postgresql
command: "start starkandwayne/postgresql --group shield"
EOF
docker-compose up
You can use the shield cli to interact with the daemon. Download it from the github-release.
From another terminal:
$ shield create-backend hab https://localhost
Successfully created backend 'hab', pointing to 'https://localhost'
Using https://localhost (hab) as SHIELD backend
$ export SHIELD_API_TOKEN=autoprovision
To actually backup a system you need to create a few entities in shield such as a policy, schedule and store. Lets create a schedule that takes a backup every day at 4am via the cli:
$ shield create-schedule -k
Schedule Name: daily
Summary:
Time Spec (i.e. 'daily 4am'): daily 4am
Schedule Name: daily
Summary:
Time Spec (i.e. 'daily 4am'): daily 4am
Really create this schedule? [y/n] y
Created new schedule
Name: daily
Summary:
Timespec: daily 4am
$ shield schedules -k
Name Summary Frequency / Interval (UTC)
==== ======= ==========================
daily daily 4am
Because creating all entities manually is error prone we can also automate it by using the shield-agent.
Running Shield agent
The shield-agent is another component of Shield which is typically co-located with the data store you want to backup. You can configure it to automatically provision the elements that shield needs to run a backup.
Stop the docker-compose system via:
docker-compose stop && docker-compose rm -f
Use an EDITOR to add the agent to the docker-compose file. Add the agent service under the already existing services: key:
services:
agent: # to autoprovision the dependant entities
image: starkandwayne/shield-agent
command: "start starkandwayne/shield-agent --bind daemon:shield.default --peer database"
environment:
HAB_SHIELD_AGENT: |
[[stores]]
name='local'
plugin='fs'
[stores.config]
base_dir='/backups'
[schedules]
daily='daily 4am'
[retention-policies]
shortterm='86400'
links:
- database
Bring it up and lets see if it worked:
$ docker-compose up
Once everything is runnin you can see the configured entities in another terminal:
$ shield policies -k
Name Summary Expires in
==== ======= ==========
shortterm 1 days
$ shield stores -k
Name Summary Plugin Configuration
==== ======= ====== =============
local fs {
"base_dir": "/backups"
}
Excellent we have now automatically configured a store. For the demo we are using the fs plugin to store backups in a local folder (/backups). In production you would want to use a plugin that can store the backups on a cloud based object store like s3.
Auto-configuring Redis
Now that we have a schedule, policy and store in place we can bring up Redis and have it automatically configure Shield to run backups.
Again stop the running system:
docker-compose stop && docker-compose rm -f
And add Redis to the docker-compose.yml. Again the redis service belongs under the already existing services: key. The volumes key new:
services:
redis:
image: starkandwayne/redis:edge
volumes:
- backups-volume:/backups
ports:
- 6379:6379
command: "start starkandwayne/redis --peer shield --bind shield:shield.default"
environment:
HAB_REDIS: |
bootstrap_from_backup=true
backups_schedule='daily'
backups_retention='shortterm'
backups_store='local'
links:
- shield
volumes:
backups-volume: {}
Bring it up and have a look:
$ docker-compose up
It can take a while for the whole system to come up but eventually you should see:
% shield jobs -k
Name P? Summary Retention Policy Schedule Remote IP Target
==== == ======= ================ ======== ========= ======
redis-default N shortterm daily 172.27.0.5:5444 {
"base_dir": "/hab/svc/redis/data"
}
So the Redis service we just added was able to configure its own backup job just by binding to a running Shield daemon. Cool!
Lets write a value, take a backup and see if it works:
$ redis-cli -a password SET hello world
OK
$ shield run redis-default -k
Scheduled immediate run of job
To view task, type shield task f82752ae-8066-4bca-9c71-47dc35464c80
$ shield archives -k
UUID Target Restore IP Store Taken at Expires at Status Notes
==== ====== ========== ===== ======== ========== ====== =====
fb2b2b0b-925b-4e69-8083-ab649760048e redis-default (fs) 192.168.16.5:5444 default (fs) Tue, 16 May 2017 13:29:02 +0000 Wed, 17 May 2017 13:29:02 +0000 valid
So we set a value and manually took a backup. Lets destroy and recreate the Redis service. Thanks to the auto-bootstrapping feature the value should be restored without any further input:
$ docker-compose stop redis && docker-compose rm -f redis
Stopping hab_redis_1 ... done
Going to remove hab_redis_1
Removing hab_redis_1 ... done
$ docker-compose up -d redis
hab_database_1 is up-to-date
hab_agent_1 is up-to-date
hab_shield_1 is up-to-date
$ until redis-cli -a password GET hello; do echo 'Waiting for redis to bootstrap'; sleep 1; done
Waiting for redis to bootstrap
Waiting for redis to bootstrap
Waiting for redis to bootstrap
Waiting for redis to bootstrap
"world"
So thanks to Shield and Habitat's binding feature we are very easily able to add arbitrary Redis services all with backups preconfigured. |
之前的一篇博客专门介绍了神经网络的搭建,是在python环境下基于numpy搭建的,之前的numpy版两层神经网络,不能支持增加神经网络的层数。最近看了一个介绍tensorflow的视频,介绍了关于tensorflow的构建神经网络的方法,特此记录。
tensorflow的构建封装的更加完善,可以任意加入中间层,只要注意好维度即可,不过numpy版的神经网络代码经过适当地改动也可以做到这一点,这里最重要的思想就是层的模型的分离。
import tensorflow as tf
import numpy as np
def addLayer(inputData,inSize,outSize,activity_function = None):
Weights = tf.Variable(tf.random_normal([inSize,outSize]))
basis = tf.Variable(tf.zeros([1,outSize])+0.1)
weights_plus_b = tf.matmul(inputData,Weights)+basis
if activity_function is None:
ans = weights_plus_b
else:
ans = activity_function(weights_plus_b)
return ans
x_data = np.linspace(-1,1,300)[:,np.newaxis] # 转为列向量
noise = np.random.normal(0,0.05,x_data.shape)
y_data = np.square(x_data)+0.5+noise
xs = tf.placeholder(tf.float32,[None,1]) # 样本数未知,特征数为1,占位符最后要以字典形式在运行中填入
ys = tf.placeholder(tf.float32,[None,1])
l1 = addLayer(xs,1,10,activity_function=tf.nn.relu) # relu是激励函数的一种
l2 = addLayer(l1,10,1,activity_function=None)
loss = tf.reduce_mean(tf.reduce_sum(tf.square((ys-l2)),reduction_indices = [1]))#需要向相加索引号,redeuc执行跨纬度操作
train = tf.train.GradientDescentOptimizer(0.1).minimize(loss) # 选择梯度下降法
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(10000):
sess.run(train,feed_dict={xs:x_data,ys:y_data})
if i%50 == 0:
print sess.run(loss,feed_dict={xs:x_data,ys:y_data})
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.