Using Whisper, the open-source speech recognition model, makes high-quality speech recognition (transcription) easy. In this article, let's use Whisper to build a simple real-time speech recognition tool.
What is the Whisper speech recognition model?
"Whisper" is an open-source speech recognition model published by OpenAI, the company famous for ChatGPT. It is a highly accurate model that can convert speech to text not only in English but in many languages, including Japanese. It maintains high recognition accuracy even in noisy environments and is used for tasks such as writing meeting minutes, generating subtitles, and automatic transcription.
Another attraction is that it is easy to use from Python, which allows flexible applications. So in this article, let's build a real-time speech recognition tool in Python.
Install the libraries used for speech recognition
Launch a terminal (PowerShell on Windows, Terminal.app on macOS) and run the following commands. Here we use a Python virtual environment (venv) to set up the environment while avoiding trouble.
# (1) Create a Python virtual environment with venv
python -m venv venv
# (2) Activate the venv - on Windows
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser # allow locally created PowerShell scripts to run
.\venv\Scripts\Activate.ps1 # activate the environment
# (2) Activate the venv - on macOS/Linux
source venv/bin/activate
brew install libsndfile # install the libsndfile library
# (3) Install the libraries
pip install transformers torchaudio sounddevice soundfile
Note that the author verified the programs with Python 3.12.10. If a program does not run correctly, switching to a different Python version may help.
Libraries of this kind often stop working when Python is updated to the latest release. At the time of writing, the latest stable Python is 3.13.5, but when installing AI-related libraries it is better to use a version one or two minor releases behind (3.12.x or 3.11.x).
In addition, on Windows, installing FFmpeg is required. Download a Windows binary from here ( https://github.com/BtbN/FFmpeg-Builds/releases ) and add its bin folder to the PATH environment variable. Follow these steps:
1. From the page above, choose and download "ffmpeg-master-latest-win64-gpl-shared.zip". Extract the ZIP file and copy the contents of the archive to, for example, C:\ffmpeg.
2. Click the Windows menu, search for "environment variables", and open the "Edit the system environment variables" tool.
3. In the environment variables editor, double-click the Path entry, add "C:\ffmpeg\bin" to it, and press the OK button.
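After updating PATH, open a new terminal and make sure the ffmpeg binary can actually be found. One quick way to check from Python (an illustrative snippet, not part of the tool itself) is shutil.which, which searches PATH the same way the shell does:

```python
import shutil

# shutil.which returns the full path of the command, or None if not on PATH
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path:
    print("ffmpeg found:", ffmpeg_path)
else:
    print("ffmpeg not found - check your PATH setting")
```

If this prints "ffmpeg not found", recheck step 3 above and restart the terminal so the new PATH takes effect.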
Try the simplest possible program
Now that the libraries are installed, let's start writing programs. To build a real-time speech recognition tool, we need two things: recording audio and running speech recognition.
First, let's write a program that picks up sound from the PC's microphone and records it. The following program records for five seconds and saves the result to "output.wav".
"""Record from the PC microphone and save to a WAV file"""
import sounddevice as sd
import numpy as np
import torchaudio
import torch

# Recording settings --- (*1)
SAMPLE_RATE = 16000  # sampling rate
DURATION = 5  # recording time (seconds)
OUTPUT_FILE = "output.wav"  # name of the WAV file to save

def record_audio():  # --- (*2)
    """Record audio from the microphone and save it to a WAV file"""
    print("Recording started...")
    audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1, dtype='float32')
    sd.wait()  # wait for recording to finish --- (*3)
    print("Recording finished. Saving...")
    # Convert the NumPy array to a tensor and save it with torchaudio --- (*4)
    audio_tensor = torch.from_numpy(audio.T)
    torchaudio.save(OUTPUT_FILE, audio_tensor, SAMPLE_RATE)
    print(f"Saved as a WAV file: {OUTPUT_FILE}")

if __name__ == "__main__":
    record_audio()
To run the program, save the code above as "rec.py" and run the following command from the terminal. Recording starts, and after five seconds the program writes "output.wav".
python rec.py
Let's walk through the program. In (*1), the recording settings are defined. The Whisper model expects audio data sampled at 16,000 Hz, so do not change this value.
In (*2), audio is recorded from the microphone, and the wait method at (*3) blocks until recording finishes.
After recording completes, (*4) converts the NumPy array to a PyTorch tensor and saves it to a file.
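As a side note on step (*4): sd.rec returns a 2-D array of shape (frames, channels), while torchaudio.save expects (channels, frames), which is why the code transposes with audio.T. A minimal sketch with dummy data (no microphone needed) shows the shapes involved:

```python
import numpy as np

SAMPLE_RATE = 16000
DURATION = 5

# Dummy recording: sd.rec would return an array of this shape for 1 channel
audio = np.zeros((DURATION * SAMPLE_RATE, 1), dtype='float32')
print(audio.shape)    # (80000, 1) - (frames, channels)
print(audio.T.shape)  # (1, 80000) - (channels, frames), as torchaudio.save expects
```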
A program that runs speech recognition on a WAV file
Next, let's look at a program that loads the WAV file "output.wav", runs speech recognition on it, and prints the result as text.
import torch
from transformers import pipeline

# Whisper model and settings --- (*1)
MODEL = "openai/whisper-small"  # change as needed
LANGUAGE = "japanese"  # use Japanese
WAV_FILE = "output.wav"  # audio file name

# Device detection (GPU support) --- (*2)
DEVICE = "cpu"
if torch.cuda.is_available():
    DEVICE = "cuda"
elif torch.backends.mps.is_available():
    DEVICE = "mps"
print("Device:", DEVICE)

# Initialize the pipeline --- (*3)
pipe = pipeline(
    "automatic-speech-recognition",
    model=MODEL,
    device=DEVICE)

# Run speech recognition --- (*4)
result = pipe(
    WAV_FILE,
    generate_kwargs={
        "language": LANGUAGE,
        "task": "transcribe"
    })

# Show the result --- (*5)
print("Recognition result:")
print(result["text"])
To run the program, save the code above as "wav2text.py" and run the command below. It performs speech recognition on the audio recorded earlier ("output.wav") and prints the text.
python wav2text.py
Let's review the key points. This program uses transformers, a package popular for AI work, to perform speech recognition.
(*1) specifies the Whisper model and language used for recognition.
(*2) detects the device so that the GPU is used when NVIDIA CUDA is available. "mps" applies when running on Apple silicon under macOS; specifying it greatly speeds up inference there.
(*3) loads the Whisper speech recognition model and initializes the pipeline. Here it prepares recognition with "openai/whisper-small", the relatively modest model specified in (*1). If your PC is powerful enough, changing the MODEL variable to something like "openai/whisper-large-v3" improves recognition accuracy considerably.
(*4) runs the recognition, and (*5) prints the result to the screen.
The real-time speech recognition tool
Once you have confirmed that the programs above work correctly, let's build the real-time speech recognition tool. The finished program is 123 lines: very short for a real-time speech recognition tool, but a little too long to print in full in this column.
The complete program has therefore been uploaded to this Gist ( https://gist.github.com/kujirahand/3ec6b35ba27f58b1ac596cf8a2db9447 ); please refer to it for the full version. Here we will go through the program in excerpts.
# Specify the speech recognition model to use --- (*1)
MODEL = "openai/whisper-large-v3"
# MODEL = "openai/whisper-medium"
# MODEL = "openai/whisper-small"
The following part defines the audio and speech recognition settings. Since this is a real-time tool, it is designed to skip silent sections. You can therefore tune the accuracy of silence detection by changing the settings that start with SILENCE_. In particular, making the SILENCE_THRESHOLD value at (*3) smaller loosens the test, so that even quiet sounds are treated as speech.
# Audio settings --- (*2)
SAMPLE_RATE = 16000  # Whisper expects 16kHz
CB_DURATION = 0.2  # callback block size (seconds)
ASR_DURATION = 5  # speech recognition window (seconds)
LANGUAGE = "japanese"  # language setting (Japanese)
SILENCE_THRESHOLD = 0.003  # threshold for judging a block as silent --- (*3)
SILENCE_THRESHOLD_L = 0.01  # threshold for judging a long audio buffer as silent
SILENCE_TIMEOUT = 1.0  # seconds of silence before recognition runs
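To get a feel for these thresholds, here is a small sketch (NumPy only, with synthetic data) of the same RMS calculation the tool's is_silent function performs, applied to a faint signal and a speech-level signal:

```python
import numpy as np

SILENCE_THRESHOLD = 0.003

def rms(audio_data):
    """Root mean square level of an audio block"""
    return np.sqrt(np.mean(audio_data ** 2))

quiet = np.full(3200, 0.001, dtype='float32')  # faint signal, below threshold
loud = np.full(3200, 0.05, dtype='float32')    # speech-level signal

print(rms(quiet) < SILENCE_THRESHOLD)  # True: judged silent
print(rms(loud) < SILENCE_THRESHOLD)   # False: judged speech
```

Raising the threshold would push more blocks into the "silent" category; lowering it does the opposite.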
The part at (*4) below loads the speech recognition model and initializes the pipeline.
# Initialize the pipeline --- (*4)
# ...omitted...
pipe = pipeline("automatic-speech-recognition", model=MODEL, device=DEVICE)
print("### Whisper model loaded.")
In (*5) below, we initialize a queue variable that manages the audio data. Recording and recognition both run in real time, each in its own thread. To keep the two from interfering, a thread-safe queue is used so that data can be passed safely between threads.
# Initialize the audio input queue --- (*5)
audio_q = queue.Queue()
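The idea can be illustrated with a minimal producer/consumer sketch using only the standard library: one thread pushes blocks onto a queue.Queue (as the audio callback will do) while a worker thread pops and processes them (as asr_worker will do). The doubling step here is just a stand-in for the real recognition work:

```python
import queue
import threading

audio_q = queue.Queue()  # thread-safe: put() and get() may run concurrently
results = []

def worker():
    """Consumer thread: take blocks off the queue until a sentinel arrives"""
    while True:
        data = audio_q.get()
        if data is None:  # sentinel value meaning "stop"
            break
        results.append(data * 2)  # stand-in for the recognition step

t = threading.Thread(target=worker, daemon=True)
t.start()
for block in [1, 2, 3]:  # stand-in for the audio callback pushing blocks
    audio_q.put(block)
audio_q.put(None)
t.join()
print(results)  # [2, 4, 6]
```

Because queue.Queue handles its own locking, neither side needs explicit locks.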
The part that actually records audio and pushes data onto the queue is shown below. When sd.InputStream is executed at (*15), recording starts. Each time a block of audio data is captured, the function passed as the callback argument, marked (*6), is called, and at that moment the data is appended to the queue variable audio_q.
def callback(indata, _frames, _time, status):
    """ Audio input callback function """  # --- (*6)
    # ...omitted...
    audio_q.put(indata[:, 0].copy())
...
def main():
    """ Start recording from the microphone """  # --- (*15)
    # ...audio input...
    try:
        with sd.InputStream(
                samplerate=SAMPLE_RATE, channels=1,
                callback=callback,
                blocksize=int(SAMPLE_RATE * CB_DURATION)):
            while True:
                time.sleep(1)
    except KeyboardInterrupt:
        print("<<< Finished.")
Note that some code is omitted above to keep the excerpt readable. When the main function runs, it also starts the speech recognition thread. asr_worker is the function that receives audio data from audio_q and, if it is not silent, runs speech recognition.
threading.Thread(target=asr_worker, daemon=True).start()
The actual definition of asr_worker looks like the following. At (*8) it takes audio data off the queue, and (*9) checks whether it is silent; if silence has continued for a fixed time, recognition is forced to run. Otherwise, (*10) appends the data to the running buffer, and once the buffer is full, (*11) runs speech recognition.
def asr_worker():
    """ Worker thread that performs speech recognition """
    buffer = []
    silence_count = 0  # count consecutive silent blocks
    while True:
        data = audio_q.get()  # --- (*8)
        # Silence detection: check whether this block is silent --- (*9)
        if is_silent(data):
            silence_count += 1
            # If silence has continued long enough, run recognition
            if buffer and silence_count >= int(SILENCE_TIMEOUT / CB_DURATION):
                # Run speech recognition
                total = np.concatenate(buffer)
                perform_asr(total)
                buffer.clear()
                silence_count = 0
            continue
        # Audio detected, so reset the silence counter
        silence_count = 0
        # Append the audio data to the buffer --- (*10)
        buffer.append(data)
        total = np.concatenate(buffer)
        if len(total) < SAMPLE_RATE * ASR_DURATION:
            continue
        # The audio data is long enough, so run recognition --- (*11)
        perform_asr(total)
        buffer.clear()
To decide whether audio data is silent, the function is_silent below is defined. As shown at (*7), it makes a simple check using the RMS (root mean square) of the samples.
def is_silent(audio_data, threshold=SILENCE_THRESHOLD):
    """Judge whether audio data is silent"""
    # Compute the RMS (root mean square) to measure the audio level --- (*7)
    rms = np.sqrt(np.mean(audio_data ** 2))
    return rms < threshold
The function perform_asr below receives the audio data and performs speech recognition. (*12) checks once more whether the data is silent and, if so, returns without doing anything. (*13) saves the audio data to a temporary file, and (*14) runs speech recognition on it.
def perform_asr(audio_data):
    """Run speech recognition"""
    # Check for silence and do nothing if silent --- (*12)
    if is_silent(audio_data, threshold=SILENCE_THRESHOLD_L):
        print(">>> (silence)")
        return
    audio = torch.from_numpy(audio_data).float()
    # Save to a temporary file and recognize --- (*13)
    torchaudio.save(TEMP_FILE, audio.unsqueeze(0), sample_rate=SAMPLE_RATE, format="wav")
    try:
        # Run speech recognition --- (*14)
        result = pipe(
            TEMP_FILE,
            generate_kwargs={
                "language": LANGUAGE,
                "task": "transcribe"})
        # Show the result
        text = ""
        if result:
            text = str(result.get("text", "")).strip()
        print(">>> [ASR]", text)
    except Exception as e:
        print(f">>> Speech recognition error: {e}")
To run the program, save the code from the Gist ( https://gist.github.com/kujirahand/3ec6b35ba27f58b1ac596cf8a2db9447 ) as "asr.py" and run the following command.
python asr.py
Note that this assumes you followed the steps at the beginning of the article: install the libraries inside the venv virtual environment, then run the program.
When the program runs, recording starts, speech recognition is performed in real time, and the results are printed to the screen.
Summary
In this article we built a real-time speech recognition tool. The tool was completed in just 123 lines of code, which shows how well developed Python's AI-related tooling has become.
That said, unless your PC is reasonably powerful, it may not feel truly real-time. In that case, note that OpenAI, which provides Whisper, also offers a paid API that runs Whisper speech recognition even from a low-spec machine, so you could modify just the recognition part to call OpenAI's paid API instead.
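As a rough sketch of that modification: instead of calling the local pipeline, perform_asr could upload the temporary WAV file to OpenAI's hosted transcription endpoint via the official openai package. The model name "whisper-1", the helper name, and the exact call structure are assumptions to verify against the current OpenAI API documentation; an API key (OPENAI_API_KEY) is required.

```python
def transcribe_with_api(wav_path: str) -> str:
    """Hypothetical helper: send a WAV file to OpenAI's transcription API.

    Requires `pip install openai` and the OPENAI_API_KEY environment variable.
    """
    from openai import OpenAI  # imported here so the sketch loads without the package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    with open(wav_path, "rb") as f:
        result = client.audio.transcriptions.create(
            model="whisper-1",  # hosted Whisper model name (assumption: check current docs)
            file=f,
            language="ja")
    return result.text

if __name__ == "__main__":
    print(transcribe_with_api("output.wav"))
```

Swapping this in for the pipe(...) call would move the heavy inference off your machine at the cost of network latency and API fees.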
Real-time speech recognition is useful in all sorts of situations, so use this article as a starting point and improve the tool into something full-fledged.
A freelance programmer who, through Kujirahand, works to share the fun of programming. His best-known works include the Japanese programming language "Nadeshiko" and the text-to-music tool "Sakura". He won an Online Software Grand Prize in 2001, was certified a MITOU Youth Super Creator in 2004, and received the OSS Contributor Award in 2010. He has written more than 50 technical books, most recently including a textbook on prompt engineering for mastering large language models (Mynavi Publishing), a book on building desktop apps with Python (Socym), a practical Python textbook (2nd edition), and a textbook on Python automation for everyday work (Mynavi Publishing).