• Python Record


    1. partial 

    from functools import partial

    # A normal function
    def f(a, b, c, x):
        print("a", a)
        print("b", b)
        print("c", c)
        print("x", x)
        return 1000*a + 100*b + 10*c + x

    # A partial function that calls f with a=3, b=1, x=4 pre-bound
    g = partial(f, 3, 1, x=4)

    # Calling g(5) fills in the remaining positional argument c
    print(g(5))

     Output: a=3, b=1, c=5, x=4, and g(5) returns 3154.
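As another quick sketch (the `power` function here is illustrative, not from the original notes), partial is also handy for pre-binding keyword arguments:

```python
from functools import partial

def power(base, exp):
    return base ** exp

# Pre-bind the keyword argument `exp`
square = partial(power, exp=2)
cube = partial(power, exp=3)

print(square(5))  # 25
print(cube(2))    # 8
```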

    2. nn.LayerNorm

    torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None)

    • normalized_shape (int or list or torch.Size) – input shape from an expected input of size

        [* × normalized_shape[0] × normalized_shape[1] × … × normalized_shape[-1]]

    • If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.

    • eps – a value added to the denominator for numerical stability. Default: 1e-5

    • elementwise_affine – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
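To make the parameters concrete, here is a small check of my own (not from the original notes) comparing nn.LayerNorm against a manual normalization over the last dimension:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 4)
ln = nn.LayerNorm(4)  # defaults: eps=1e-5, elementwise_affine=True

# Manual normalization over the last dimension; LayerNorm uses the
# biased variance (unbiased=False)
mean = x.mean(dim=-1, keepdim=True)
var = x.var(dim=-1, unbiased=False, keepdim=True)
manual = (x - mean) / torch.sqrt(var + ln.eps)

# Freshly initialized weights are ones and biases zeros, so the two match
print(torch.allclose(ln(x), manual, atol=1e-6))  # True
```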

    import torch
    import torch.nn as nn

    # NLP example
    batch, sentence_length, embedding_dim = 20, 5, 10
    embedding = torch.randn(batch, sentence_length, embedding_dim)
    layer_norm = nn.LayerNorm(embedding_dim)
    # Activate module
    layer_norm(embedding)

    # Image example
    N, C, H, W = 20, 5, 10, 10
    input = torch.randn(N, C, H, W)
    # Normalize over the last three dimensions
    # (i.e. the channel and spatial dimensions)
    layer_norm = nn.LayerNorm([C, H, W])
    output = layer_norm(input)

    3. a or b

    t1 = None
    t2 = 2
    t1 = t1 or t2
    print(t1)

    Result: 2

    t1 = 1
    t2 = 2
    t1 = t1 or t2
    print(t1)

    Result: 1
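One caveat worth noting: `or` returns the first truthy operand, so falsy-but-valid values such as 0 or "" get replaced too. A sketch:

```python
t1 = 0      # 0 is a legitimate value here, but it is falsy
t2 = 2
print(t1 or t2)  # 2 -- the 0 was silently replaced

# An explicit None check keeps the 0
t1 = t2 if t1 is None else t1
print(t1)        # 0
```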

    4. nn.GELU

    torch.nn.GELU(approximate='none')

    Applies the Gaussian Error Linear Units function GELU(x) = x * Φ(x), where Φ(x) is the cumulative distribution function of the standard Gaussian distribution.

    >>> m = nn.GELU()
    >>> input = torch.randn(2)
    >>> output = m(input)
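A quick check of my own that the module matches the formula above, using the error function to compute Φ(x):

```python
import torch
import torch.nn as nn

m = nn.GELU()  # approximate='none' by default
x = torch.randn(5)

# Phi(x): CDF of the standard normal, via the error function
phi = 0.5 * (1.0 + torch.erf(x / 2 ** 0.5))
print(torch.allclose(m(x), x * phi, atol=1e-6))  # True
```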

    5. nn.Dropout

    torch.nn.Dropout(p=0.5, inplace=False)
    >>> m = nn.Dropout(p=0.2)
    >>> input = torch.randn(20, 16)
    >>> output = m(input)
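Two behaviors worth remembering (a sketch of my own): in training mode, surviving elements are rescaled by 1/(1-p), and in eval mode Dropout is a no-op:

```python
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
x = torch.ones(8)

m.train()
y = m(x)
# Kept elements are scaled by 1/(1-p) = 2, dropped ones become 0
print(set(y.tolist()) <= {0.0, 2.0})  # True

m.eval()
print(torch.equal(m(x), x))  # True: Dropout is the identity in eval mode
```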

    6. torch.linspace

    # Importing the PyTorch library
    import torch

    # Applying the linspace function and
    # storing the resulting tensors in 'a' and 'b'
    a = torch.linspace(3, 10, 5)
    print("a = ", a)
    b = torch.linspace(start=-10, end=10, steps=5)
    print("b = ", b)

    a =  tensor([ 3.0000,  4.7500,  6.5000,  8.2500, 10.0000])
    b =  tensor([-10.,  -5.,   0.,   5.,  10.])
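The spacing in the outputs above follows directly from the arguments; a small check of my own:

```python
import torch

a = torch.linspace(3, 10, 5)
# Consecutive points are spaced (end - start) / (steps - 1) apart
step = (10 - 3) / (5 - 1)
print(step)  # 1.75
print(torch.allclose(a[1:] - a[:-1], torch.full((4,), step)))  # True
```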

    7. nn.Identity

    import torch
    import torch.nn as nn

    m = nn.Identity()
    input = torch.randn(4, 4)
    output = m(input)   # Identity returns its input unchanged
    print("m", m)
    print("input", input)
    print("output", output)
    print(output.size())
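A common use for nn.Identity (a sketch of my own, not from the original notes) is as a drop-in placeholder, e.g. to disable a layer without restructuring the model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU())
model[1] = nn.Identity()   # swap out the activation; shapes are unaffected

x = torch.randn(2, 8)
print(torch.equal(model(x), model[0](x)))  # True: second stage is a no-op
```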

    8. nn.init.trunc_normal_()

    torch.nn.init.trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0)

     Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution

     N(mean, std²)

    with values outside [a, b] redrawn until they are within the bounds. The method used for generating the random values works best when a ≤ mean ≤ b.
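A quick check of my own that the truncation bounds really hold:

```python
import torch
import torch.nn as nn

t = torch.empty(10000)
nn.init.trunc_normal_(t, mean=0.0, std=1.0, a=-2.0, b=2.0)

# Every sample lies inside the truncation bounds [a, b]
print(bool((t >= -2.0).all()) and bool((t <= 2.0).all()))  # True
```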

    9. OrderedDict

    from collections import OrderedDict

    print("This is a Dict:\n")
    d = {}
    d['a'] = 1
    d['b'] = 2
    d['c'] = 3
    d['d'] = 4
    for key, value in d.items():
        print(key, value)

    print("\nThis is an Ordered Dict:\n")
    od = OrderedDict()
    od['a'] = 1
    od['b'] = 2
    od['c'] = 3
    od['d'] = 4
    for key, value in od.items():
        print(key, value)

    print("\nThis is an Ordered Dict built from a list of pairs:\n")
    od = OrderedDict([("a1", 1), ("b1", 2), ("c1", 3), ("d1", 4)])
    for key, value in od.items():
        print(key, value)

     All three loops print their items in insertion order (the third uses different keys, a1–d1, but the ordering behavior is the same).
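Since Python 3.7 plain dicts also preserve insertion order; OrderedDict still adds order-manipulation extras such as move_to_end. A short sketch:

```python
from collections import OrderedDict

od = OrderedDict([("a", 1), ("b", 2), ("c", 3)])
od.move_to_end("a")              # move "a" to the back
print(list(od))                  # ['b', 'c', 'a']
od.move_to_end("c", last=False)  # move "c" to the front
print(list(od))                  # ['c', 'b', 'a']
```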

    10. random.seed()

    How does the seed function work?

    The seed function initializes the state of the random number generator, so that it generates the same sequence of random numbers on every execution of the code (for a given seed value), whether on the same machine or on different machines. If seed() is never called, the generator is seeded from an unpredictable source, typically the current system time.

    Using random.seed() function

    Here we will see how we can generate the same random number every time with the same seed value. 

    # random module is imported
    import random

    for i in range(5):
        # Any number can be used in place of '0'.
        random.seed(0)
        # The generated random number will be between 1 and 1000.
        print(random.randint(1, 1000))

    Output:

    865
    865
    865
    865
    865

    # importing random module
    import random

    random.seed(3)
    # print a random number between 1 and 1000
    print(random.randint(1, 1000))
    # to get the same random number again, reseed with the same value
    random.seed(3)
    print(random.randint(1, 1000))
    # without reseeding, the generator continues its sequence
    # and gives an unpredictable value
    print(random.randint(1, 1000))

    Output:

    244
    244
    607

     After random.seed(0), the next random.randint(...) call always returns the same value; seeding with a different value (e.g. random.seed(1)) makes the next call return a different, but equally reproducible, value.

    11. Converting between Tensors and Lists

    a. Concatenating a list of Tensors into one Tensor

    import torch

    # Step: generate a tensor of shape [5, 3, 3, 4];
    # think of it as B=5, N_views=3, Tokens=3, Dim=4
    a = torch.arange(5*3*3*4).reshape([5, 3, 3, 4])
    print(a.shape)     # torch.Size([5, 3, 3, 4])
    print("a=", a)     # the values 0..179 in order, grouped as 5 x 3 x 3 x 4

    a_local2 = []
    a = a.reshape(3, 5, 3, 4)
    print("a.reshape", a)   # the same 180 values, now grouped as 3 x 5 x 3 x 4

    for j in range(a.shape[0]):
        a_local2.append(a[j, :, :, :])   # each slice has shape [5, 3, 4]
    print(torch.cat(a_local2))           # shape torch.Size([15, 3, 4])

    c = torch.cat(a_local2).reshape(5, 3, 3, 4)
    print("c=", c)   # identical to the original a: cat undoes the slicing

    a_local2 = torch.cat(a_local2).reshape(5, 9, 4)
    print("torch_a.shape", a_local2.shape)   # torch.Size([5, 9, 4])
    print("torch_a", a_local2)

    Because the tensor holds the consecutive integers 0–179 and both reshape and torch.cat preserve element order, every printout enumerates 0–179 in the same order; only the bracket grouping changes. For example, the final [5, 9, 4] tensor lists the rows [0, 1, 2, 3] through [176, 177, 178, 179], nine rows per batch entry.
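Related and easy to confuse (a note of my own): torch.cat joins tensors along an existing dimension, while torch.stack creates a new one:

```python
import torch

tensors = [torch.zeros(3, 4), torch.ones(3, 4)]
# cat joins along an existing dimension...
print(torch.cat(tensors, dim=0).shape)    # torch.Size([6, 4])
# ...while stack adds a new leading dimension
print(torch.stack(tensors, dim=0).shape)  # torch.Size([2, 3, 4])
```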

    12. Choosing a Dimension in Multi-Dimensional Tensors

    import torch

    a = torch.arange(3*2*10).reshape([3, 2, 10])
    print("a.shape", a.shape)
    print("a=", a)
    print("a[:,0]", a[:, 0])
    print("a[0,:]", a[0, :])

    Results:

    a.shape torch.Size([3, 2, 10])
    a= tensor([[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
                [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]],
               [[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
                [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]],
               [[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
                [50, 51, 52, 53, 54, 55, 56, 57, 58, 59]]])
    a[:,0] tensor([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
                   [20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
                   [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]])
    a[0,:] tensor([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
                   [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])
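Indexing with a single integer along a dimension drops that dimension; a small check of my own showing the equivalence with Tensor.select:

```python
import torch

a = torch.arange(3 * 2 * 10).reshape(3, 2, 10)
# a[:, 0] fixes index 0 along dim 1 and drops that dimension;
# it is equivalent to a.select(1, 0)
print(a[:, 0].shape)                         # torch.Size([3, 10])
print(torch.equal(a[:, 0], a.select(1, 0)))  # True
```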

    Chapter 2

    2.1  random.sample()

    from random import sample

    # Prints a list of random items of the given length
    list1 = [1, 2, 3, 4, 5]
    print(sample(list1, 3))

    Output (varies per run):

    [2, 3, 5]
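Two properties worth knowing (my own note): sample draws without replacement, and it can be made repeatable by seeding first:

```python
from random import sample, seed

seed(0)  # fix the seed so the draw is repeatable
picked = sample([1, 2, 3, 4, 5], 3)
print(len(picked) == len(set(picked)))            # True: no repeats
print(all(p in [1, 2, 3, 4, 5] for p in picked))  # True
```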

    Chapter 3 <<PyTorch>>

    3.1 transforms.Compose() 

    torchvision.transforms is PyTorch's package for image preprocessing.

    Generally, Compose() is used to combine/integrate multiple preprocessing steps into a single pipeline.

    # For example
    transforms.Compose([
        transforms.CenterCrop(224),
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
        transforms.Resize(256),
    ])
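Under the hood, Compose simply applies each transform in sequence. A minimal pure-Python sketch of the idea (MyCompose is illustrative, not part of torchvision):

```python
class MyCompose:
    """Minimal illustration of what torchvision's Compose does."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, x):
        for t in self.transforms:  # apply each step in order
            x = t(x)
        return x

pipeline = MyCompose([lambda x: x + 1, lambda x: x * 2])
print(pipeline(3))  # 8: (3 + 1) * 2
```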

     

     

     

  • Original source: https://blog.csdn.net/qq_40837542/article/details/127575584