Python Selenium & BeautifulSoup in Depth


Python-Selenium API

Selenium with Python

Selenium with Python (Chinese translation)

Installation

Introduction

The Selenium Python bindings provide a simple API for writing functional/acceptance tests using Selenium WebDriver. Through the Selenium Python API you can access all the functionality of Selenium WebDriver in an intuitive way.

The Selenium Python bindings provide a convenient API to access Selenium WebDrivers such as Firefox, Ie, Chrome, Remote, and others. The currently supported Python versions are 3.5 and above.

Installing the Python bindings for Selenium

Use pip to install the selenium package. Python 3 ships with pip available out of the box, so you can install it like this:

pip install selenium

Consider using virtualenv to create an isolated Python environment. Python 3's venv is nearly identical to virtualenv.
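For example, a typical setup might look like this (a sketch; the environment name selenium-env is arbitrary, and the last step needs network access):

```shell
# Create an isolated environment and install selenium into it
python3 -m venv selenium-env
. selenium-env/bin/activate        # on Windows: selenium-env\Scripts\activate
pip install selenium
```

Everything installed while the environment is active stays inside the selenium-env directory and does not touch the system Python.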

You can also download the Python bindings for Selenium from the selenium package page on PyPI and install them manually.

Detailed instructions for Windows users

Make sure you have an Internet connection before running the install command.

  1. Install Python 3.5 from the official download page.

  2. From the Start menu, click Run (or press Windows+R), type cmd, then run the following command to install:

C:\Python35\Scripts\pip.exe install selenium

You can now run test scripts with Python. For example, if you created a basic Selenium example and saved it as C:\my_selenium_script.py, you can run it like this:

C:\Python35\python.exe C:\my_selenium_script.py

Downloading the Selenium server

The Selenium server is only required if you want to use a remote WebDriver. See Using Selenium with remote WebDriver for more details.

The Selenium server is a Java program; Java Runtime Environment (JRE) 1.6 or newer is the recommended runtime.

You can download the 2.x Selenium server from the download page. The file looks something like selenium-server-standalone-2.x.x.jar; grab the latest 2.x release.

If you do not yet have the Java Runtime Environment (JRE) installed, download it from the Oracle website. On a GNU/Linux system with root access, you can also install the JRE using your operating system's package manager.

If the java command is on your PATH (environment variable), start the server with:

java -jar selenium-server-standalone-2.x.x.jar

Replace 2.x.x with the actual version you downloaded.

If you are not root, or java is not on your PATH, you can run it using an absolute or relative path. The command looks something like this:

/path/to/java -jar /path/to/selenium-server-standalone-2.x.x.jar

Getting Started

Simple usage

If you have installed Selenium, copy the following Python code into your editor:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()

The script above can be saved to a file (e.g. python_org_search.py) and run like this:

python python_org_search.py

The Python environment you run it with must have the selenium module installed.

Walkthrough of the example

The selenium.webdriver module provides all the WebDriver implementations. Currently supported WebDrivers are: Firefox, Chrome, IE and Remote. The Keys class provides keyboard keys such as RETURN, F1, ALT and so on:

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

Next, create an instance of Firefox WebDriver:

driver = webdriver.Firefox()

The driver.get method navigates to the page given by the URL. WebDriver will wait until the page is fully loaded (that is, until the "onload" event has fired) before returning control to your script. Note that if the page uses a lot of Ajax on load, WebDriver may not know when it has completely loaded:

driver.get("http://www.python.org")

The next line asserts that the title contains the word "Python". (An assert statement raises an AssertionError if its condition evaluates to false.)

assert "Python" in driver.title

WebDriver offers a number of ways to query the page for elements, using methods of the form find_element_by_*. For example, an input field with a name attribute can be located with find_element_by_name; see the Locating Elements section for details:

elem = driver.find_element_by_name("q")

Next, we send keys, which is similar to entering the keys with your keyboard. Special keys can be sent using the Keys class, imported from selenium.webdriver.common.keys. To be safe, we first clear any pre-populated text in the input field (e.g. "Search") so it does not affect our search results:

elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)

After submitting the page, you get the results. To ensure that some results are found, add an assertion:

assert "No results found." not in driver.page_source

Finally, the browser window is closed. You can also use the quit method instead of close: quit closes the entire browser, while close only closes one tab. If only one tab is open, most browsers will close the whole browser by default:

driver.close()

Writing tests with Selenium

Selenium is mostly used for writing test cases. The selenium package itself does not provide a testing tool or framework. You can write test cases using Python's built-in unittest module; py.test and nose are alternative tools/frameworks.

In this chapter, we use unittest to write the test code. Below is a complete test case that performs a search on the python.org site:

import unittest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class PythonOrgSearch(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Firefox()

    def test_search_in_python_org(self):
        driver = self.driver
        driver.get("http://www.python.org")
        self.assertIn("Python", driver.title)
        elem = driver.find_element_by_name("q")
        elem.send_keys("pycon")
        elem.send_keys(Keys.RETURN)
        assert "No results found." not in driver.page_source


    def tearDown(self):
        self.driver.close()

if __name__ == "__main__":
    unittest.main()

You can run it from a shell like this:

python test_python_org_search.py
.
----------------------------------------------------------------------
Ran 1 test in 15.566s

OK

The output shows that the test ran successfully.

Walkthrough of the test code

Initially, all the required modules are imported. The unittest module is a built-in Python module based on Java's JUnit; it provides a framework for organizing test cases. The selenium.webdriver module provides all the WebDriver implementations; currently supported WebDrivers are Firefox, Chrome, IE and Remote. The Keys class provides keyboard keys such as RETURN, F1, ALT and so on.

import unittest
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

The test class inherits from unittest.TestCase. Inheriting from TestCase tells the unittest module that this class is a test case:

class PythonOrgSearch(unittest.TestCase):

setUp is part of the initialization; this method is run before every test method in the class. Here it creates an instance of Firefox WebDriver:

def setUp(self):
    self.driver = webdriver.Firefox()

This is the actual test method. A test method's name must always start with test. The first line inside the method creates a local reference to the driver object created in setUp:

def test_search_in_python_org(self):
    driver = self.driver

The driver.get method navigates to the page given by the URL. WebDriver waits until the page is fully loaded (that is, until the "onload" event has fired) before returning control to the test. If the page uses a lot of Ajax on load, WebDriver may not know when it has completely loaded:

driver.get("http://www.python.org")

The next line asserts that the page title contains the word "Python":

self.assertIn("Python", driver.title)

WebDriver offers a number of ways to find elements on a page, using methods that start with find_element_by_*. For example, an input element with a name attribute can be located with the find_element_by_name method; see the Locating Elements section for details:

elem = driver.find_element_by_name("q")

Next we send keys, which is similar to entering the keys with your keyboard. Special keys can be sent using the Keys class, imported from selenium.webdriver.common.keys:

elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)

After submitting the page, you get the search results. To ensure that some results are found, use the following assertion:

assert "No results found." not in driver.page_source

The tearDown method runs after every test method; it is a place for cleanup actions such as closing the browser. You can also call the quit method instead of close: quit closes the entire browser, while close only closes one tab. If only one tab is open, most browsers will close the whole browser by default.

def tearDown(self):
    self.driver.close()

Finally, the boilerplate code that runs the test suite:

if __name__ == "__main__":
    unittest.main()

Using Selenium with remote WebDriver

To use the remote WebDriver, you need a Selenium server running. Start the server with:

java -jar selenium-server-standalone-2.x.x.jar

While the Selenium server is running, you will see a message like this:

15:43:07.541 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub

The line above tells you the URL to use when connecting to the remote WebDriver. Here are some examples:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
   command_executor='http://127.0.0.1:4444/wd/hub',
   desired_capabilities=DesiredCapabilities.CHROME)

driver = webdriver.Remote(
   command_executor='http://127.0.0.1:4444/wd/hub',
   desired_capabilities=DesiredCapabilities.OPERA)

driver = webdriver.Remote(
   command_executor='http://127.0.0.1:4444/wd/hub',
   desired_capabilities=DesiredCapabilities.HTMLUNITWITHJS)

desired_capabilities is a dictionary. Rather than using one of the default dictionaries, you can specify the values explicitly:

driver = webdriver.Remote(
   command_executor='http://127.0.0.1:4444/wd/hub',
   desired_capabilities={'browserName': 'htmlunit',
                         'version': '2',
                         'javascriptEnabled': True})

Navigating

The first thing you will probably want to do with WebDriver is open a link. The normal way to do this is by calling get:

driver.get("http://www.google.com")

WebDriver will wait until the page is fully loaded (that is, until the onload event has fired) before returning control to your script. Note that if the page uses a lot of Ajax on load, WebDriver may not know when it has completely loaded. If you need to make sure the page is fully loaded, you can use waits (see the Waits section).

Interacting with the page

Just opening a page is not all that useful; what we really want is to interact with it. More specifically, for an HTML element on the page, first we need to find it. WebDriver offers a number of ways to find elements. For example, given an element defined as:

<input type="text" name="passwd" id="passwd-id" />

You could find it using any of the following:

element = driver.find_element_by_id("passwd-id")
element = driver.find_element_by_name("passwd")
element = driver.find_element_by_xpath("//input[@id='passwd-id']")
element = driver.find_element_by_css_selector("input#passwd-id")

You can also look up a link by its text, but be careful: the text must be an exact match. Also be careful when using XPath: if the query matches more than one element, only the first is returned. If nothing can be found, a NoSuchElementException is raised.

WebDriver has an "object-based" API: all types of elements are represented by the same interface. This means that when you open your IDE's auto-complete, you will see many possible methods, but not all of them make sense or are valid for every element. Don't worry, though! WebDriver tries to do the right thing when you call something meaningless (for example, calling "setSelected()" on a "meta" element).

So, what can you do once you have an element? First of all, you may want to enter some text into a text field:

element.send_keys("some text")

You can also simulate pressing the arrow keys using the "Keys" class:

element.send_keys(" and some", Keys.ARROW_DOWN)

It is possible to call send_keys on any element, which makes it possible to test keyboard shortcuts, such as those used in Gmail. A side effect of this is that typing into a text field does not automatically clear it; instead, your input is appended to what is already there. You can easily clear the contents of an input or textarea element with the clear method:

element.clear()

Filling in forms

We already know how to enter text into an input or textarea element, but what about the other elements? You can "toggle" the state of checkboxes, and you can use click to set something like an OPTION tag selected. Dealing with SELECT elements is not too hard:

element = driver.find_element_by_xpath("//select[@name='name']")
all_options = element.find_elements_by_tag_name("option")
for option in all_options:
    print("Value is: %s" % option.get_attribute("value"))
    option.click()

This code finds the first "SELECT" element on the page, then cycles through each of its OPTION elements, printing out their values and selecting each one in turn.

As you can see, this is not the most efficient way of dealing with SELECT elements. WebDriver's support classes include one called Select, which provides useful methods for working with them:

from selenium.webdriver.support.ui import Select
select = Select(driver.find_element_by_name('name'))
select.select_by_index(index)
select.select_by_visible_text("text")
select.select_by_value(value)

WebDriver also provides useful methods to deselect already selected options:

select = Select(driver.find_element_by_id('id'))
select.deselect_all()

This will deselect all OPTIONs.

Suppose that in a test we need a list of all the selected options. The Select class provides convenient methods for that too:

select = Select(driver.find_element_by_xpath("xpath"))
all_selected_options = select.all_selected_options

To get all available options:

options = select.options

Once you have finished filling out the form, you probably want to submit it. One way is to find the "submit" button and click it:

# Assume the button has the ID "submit" :)
driver.find_element_by_id("submit").click()

Alternatively, WebDriver has a submit method on every element. If you call it on an element inside a form, WebDriver walks up the DOM until it finds the enclosing form and submits it. If the element is not inside a form, a NoSuchElementException is raised:

element.submit()

Drag and drop

You can use drag and drop, either to move an element or to drop it onto another element:

element = driver.find_element_by_name("source")
target = driver.find_element_by_name("target")

from selenium.webdriver import ActionChains
action_chains = ActionChains(driver)
action_chains.drag_and_drop(element, target).perform()

Moving between windows and frames

It is rare for a modern web application to have no frames or to be constrained to a single window. WebDriver supports moving between named windows using the switch_to_window method:

driver.switch_to_window("windowName")

All calls to driver will now be directed to the current window. But how do you know the window's name? Take a look at the JavaScript or the link that opened it:

<a href="somewhere.html" target="windowName">Click here to open a new window</a>

Alternatively, you can pass a "window handle" to switch_to_window(). Knowing this, you can iterate over every open window like so:

for handle in driver.window_handles:
    driver.switch_to_window(handle)

You can also switch between frames (or into iframes):

driver.switch_to_frame("frameName")

With the "." operator you can access subframes, and you can specify any frame by its index, like this:

driver.switch_to_frame("frameName.0.child")

This goes to the frame named "child" of the first subframe of the frame named "frameName". All frames are evaluated as if from the top-level context.

Once we are done working inside a frame, we can return to the parent content like this:

driver.switch_to_default_content()

Popup dialogs

Selenium WebDriver has built-in support for handling popup dialog boxes. After an action that triggers a popup, you can access the dialog like this:

alert = driver.switch_to_alert()

This returns the currently open dialog object. With this object you can accept it, dismiss it, read its contents, or even type into a prompt dialog. (A prompt is a dialog type that, unlike alert, accepts input.) This interface works equally well on alert, confirm, and prompt dialogs. Refer to the API documentation for more information.

Navigating browser history

Earlier, we covered opening a page with the get command (driver.get("http://www.example.com")). WebDriver has a number of smaller, task-focused interfaces, and navigation is a useful task. To open a page, use get:

driver.get("http://www.example.com") 

To move backward and forward through your browser's history:

driver.forward()
driver.back()

Please be aware that this functionality depends entirely on the underlying driver. If you are used to your browser behaving a certain way, it is quite possible that something unexpected will happen when you call these methods.

Working with cookies

Before we close this section, you may be interested in how to work with cookies. First of all, you need to be on the domain that the cookie will be valid for, since cookies only take effect under a given domain:

# Go to the correct domain
driver.get("http://www.example.com")

# Now set the cookie. This one's valid for the entire domain
cookie = {'name': 'foo', 'value': 'bar'}
driver.add_cookie(cookie)

# And now output all the available cookies for the current URL
driver.get_cookies()

Locating elements

There are various strategies to locate elements on a page. You can use the most appropriate one for your case. Selenium provides the following methods:

  • find_element_by_id
  • find_element_by_name
  • find_element_by_xpath
  • find_element_by_link_text
  • find_element_by_partial_link_text
  • find_element_by_tag_name
  • find_element_by_class_name
  • find_element_by_css_selector

To find multiple elements (these methods return a list):

  • find_elements_by_name
  • find_elements_by_xpath
  • find_elements_by_link_text
  • find_elements_by_partial_link_text
  • find_elements_by_tag_name
  • find_elements_by_class_name
  • find_elements_by_css_selector

Apart from the public methods given above, there are two private-style methods which may be useful for locating page elements: find_element and find_elements.

Example usage:

from selenium.webdriver.common.by import By

driver.find_element(By.XPATH, '//button[text()="Some text"]')
driver.find_elements(By.XPATH, '//button')

These are the attributes available for the By class:

ID = "id"
XPATH = "xpath"
LINK_TEXT = "link text"
PARTIAL_LINK_TEXT = "partial link text"
NAME = "name"
TAG_NAME = "tag name"
CLASS_NAME = "class name"
CSS_SELECTOR = "css selector"

Locating elements by id

Use this when you know the id attribute of an element. With this strategy, the first element with a matching id attribute is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <form id="loginForm">
   <input name="username" type="text" />
   <input name="password" type="password" />
   <input name="continue" type="submit" value="Login" />
  </form>
 </body>
</html>

The form element can be located like this:

login_form = driver.find_element_by_id('loginForm')

Locating elements by name

Use this when you know the name attribute of an element. With this strategy, the first element with a matching name attribute is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <form id="loginForm">
   <input name="username" type="text" />
   <input name="password" type="password" />
   <input name="continue" type="submit" value="Login" />
   <input name="continue" type="button" value="Clear" />
  </form>
</body>
</html>

The username & password elements can be located like this:

username = driver.find_element_by_name('username')
password = driver.find_element_by_name('password')

This will give the "Login" button, as it occurs before the "Clear" button:

continue_button = driver.find_element_by_name('continue')

Locating elements by XPath

XPath is the language used for locating nodes in an XML document. Since HTML documents can also be expressed as XML (XHTML), Selenium users can leverage this powerful language to target elements in their web applications. XPath extends beyond (as well as supporting) the simple methods of locating by id or name attributes, and opens up all sorts of new possibilities, such as locating the third checkbox on a page.

One of the main reasons for using XPath is when you want to locate an element that has neither an id nor a name attribute. You can use XPath either to locate the element by its absolute position on the page (not recommended), or relative to an element that does have an id or name attribute (a theoretical parent element). XPath locators can also be used to find elements via attributes other than id and name.

Absolute XPaths locate every element from the root (html), and are likely to fail with only the slightest adjustment to the application. By starting from a nearby element that has an id or name attribute, you can locate your target element based on that relationship. Such relationships rarely change, which can make your tests more robust.

For instance, consider this page source:

<html>
 <body>
  <form id="loginForm">
   <input name="username" type="text" />
   <input name="password" type="password" />
   <input name="continue" type="submit" value="Login" />
   <input name="continue" type="button" value="Clear" />
  </form>
</body>
</html>

The form element can be located like this:

login_form = driver.find_element_by_xpath("/html/body/form[1]")
login_form = driver.find_element_by_xpath("//form[1]")
login_form = driver.find_element_by_xpath("//form[@id='loginForm']")
  1. Absolute path (would break if the page structure changed even slightly)
  2. First form element in the HTML page
  3. The form element with an id attribute whose value is loginForm

The username element can be located like this:

username = driver.find_element_by_xpath("//form[input/@name='username']")
username = driver.find_element_by_xpath("//form[@id='loginForm']/input[1]")
username = driver.find_element_by_xpath("//input[@name='username']")
  1. First form element with an input child whose name attribute has the value username
  2. First input child of the form element with an id attribute whose value is loginForm
  3. First input element with a name attribute whose value is username

The "Clear" button can be located like this:

clear_button = driver.find_element_by_xpath("//input[@name='continue'][@type='button']")
clear_button = driver.find_element_by_xpath("//form[@id='loginForm']/input[4]")
  1. Input with a name attribute whose value is continue and a type attribute whose value is button
  2. Fourth input child of the form element with an id attribute whose value is loginForm

These examples cover some basic uses; to learn more about XPath, consult a dedicated reference.

There are also some very useful browser add-ons that can assist in discovering the XPath of an element:

  • XPath Checker - suggests XPath and can be used to test XPath results.
  • Firebug - XPath suggestions are just one of the many powerful features of this very useful add-on.
  • XPath Helper - for Google Chrome

Locating hyperlinks by link text

Use this when you know the link text used within an anchor tag. With this strategy, the first anchor whose link text matches is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <p>Are you sure you want to do this?</p>
  <a href="continue.html">Continue</a>
  <a href="cancel.html">Cancel</a>
</body>
</html>

The continue.html link can be located like this:

continue_link = driver.find_element_by_link_text('Continue')
continue_link = driver.find_element_by_partial_link_text('Conti')

Locating elements by tag name

Use this when you want to locate an element by its tag name. With this strategy, the first element with a matching tag name is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <h1>Welcome</h1>
  <p>Site content goes here.</p>
</body>
</html>

The h1 element can be located like this:

heading1 = driver.find_element_by_tag_name('h1')

Locating elements by class name

Use this when you want to locate an element by its class name. With this strategy, the first element with a matching class attribute is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <p class="content">Site content goes here.</p>
</body>
</html>

The p element can be located like this:

content = driver.find_element_by_class_name('content')

Locating elements by CSS selector

Use this when you want to locate an element with a CSS selector. With this strategy, the first element matching the selector is returned. If no element matches, a NoSuchElementException is raised.

For instance, consider this page source:

<html>
 <body>
  <p class="content">Site content goes here.</p>
</body>
</html>

The p element can be located like this:

content = driver.find_element_by_css_selector('p.content')

Waiting for the page to load (Waits)

Most web applications nowadays use Ajax. When a page is loaded by the browser, the elements within it may load at different times. This makes locating elements difficult: if an element is not yet present on the page, an ElementNotVisibleException is raised. Using waits, we can solve this problem. Waiting provides some slack between actions performed — mostly locating an element, or any other operation on it.

Selenium WebDriver provides two types of waits: implicit and explicit. An explicit wait makes WebDriver wait for a certain condition to occur before proceeding further. An implicit wait makes WebDriver poll the DOM for a certain amount of time when trying to locate an element.

Explicit waits

An explicit wait is code you define to wait for a certain condition to occur before proceeding further in your code. The worst case of this is time.sleep(), which sets the condition to an exact time period to wait. There are more convenient methods that let you wait only as long as required: WebDriverWait combined with an ExpectedCondition is one way this can be accomplished.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()

This waits up to 10 seconds before throwing a TimeoutException, unless it finds the element within those 10 seconds. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns success. A successful return value of the ExpectedCondition is a boolean True, or a non-null value.

Expected conditions

There are some conditions that occur frequently when automating web browsers; those implemented are listed below. The Selenium Python bindings provide convenience methods for each, so you don't have to code an expected_condition class yourself or create your own utility package for them.

  • title_is
  • title_contains
  • presence_of_element_located
  • visibility_of_element_located
  • visibility_of
  • presence_of_all_elements_located
  • text_to_be_present_in_element
  • text_to_be_present_in_element_value
  • frame_to_be_available_and_switch_to_it
  • invisibility_of_element_located
  • element_to_be_clickable
  • staleness_of
  • element_to_be_selected
  • element_located_to_be_selected
  • element_selection_state_to_be
  • element_located_selection_state_to_be
  • alert_is_present

from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
element = wait.until(EC.element_to_be_clickable((By.ID,'someid')))

The expected_conditions module contains a set of predefined conditions to use with WebDriverWait.

Custom Wait Conditions

You can also create custom wait conditions when none of the previous convenience methods fit your requirements. A custom wait condition can be created using a class with a __call__ method which returns False when the condition doesn't match.

class element_has_css_class(object):
  """An expectation for checking that an element has a particular css class.

  locator - used to find the element
  returns the WebElement once it has the particular css class
  """
  def __init__(self, locator, css_class):
    self.locator = locator
    self.css_class = css_class

  def __call__(self, driver):
    element = driver.find_element(*self.locator)   # Finding the referenced element
    if self.css_class in element.get_attribute("class"):
        return element
    else:
        return False

# Wait until an element with id='myNewInput' has class 'myCSSClass'
wait = WebDriverWait(driver, 10)
element = wait.until(element_has_css_class((By.ID, 'myNewInput'), "myCSSClass"))

Implicit waits

An implicit wait tells WebDriver to poll the DOM for a certain amount of time when trying to find an element that is not immediately available. The default setting is 0 seconds. Once set, the implicit wait applies for the lifetime of that WebDriver instance.

from selenium import webdriver

driver = webdriver.Firefox()
driver.implicitly_wait(10) # seconds
driver.get("http://somedomain/url_that_delays_loading")
myDynamicElement = driver.find_element_by_id("myDynamicElement")

Page Objects

This chapter is a tutorial introduction to the page object design pattern. A page object represents an area of the user interface of the web application you are testing.

Benefits of using the page object pattern:

  • Creating reusable code that can be shared across multiple test cases
  • Reducing the amount of duplicated code
  • If the user interface changes, the fix needs to be applied in only one place

Test case

Here is a test case that searches for a word on the python.org website and ensures that some results are found.

import unittest
from selenium import webdriver
import page

class PythonOrgSearch(unittest.TestCase):
    """A sample test class to show how page object works"""

    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.get("http://www.python.org")

    def test_search_in_python_org(self):
        """
        Tests python.org search feature. Searches for the word "pycon" then verifies that some results show up.
        Note that it does not look for any particular text in search results page. This test verifies that
        the results were not empty.
        """

        #Load the main page. In this case the home page of Python.org.
        main_page = page.MainPage(self.driver)
        #Checks if the word "Python" is in title
        assert main_page.is_title_matches(), "python.org title doesn't match."
        #Sets the text of search textbox to "pycon"
        main_page.search_text_element = "pycon"
        main_page.click_go_button()
        search_results_page = page.SearchResultsPage(self.driver)
        #Verifies that the results page is not empty
        assert search_results_page.is_results_found(), "No results found."

    def tearDown(self):
        self.driver.close()

if __name__ == "__main__":
    unittest.main()

Page object classes

The page object pattern creates an object for each web page. By following this technique, a layer of separation between the test code and the technical implementation is created.

The page.py will look like this:

from element import BasePageElement
from locators import MainPageLocators

class SearchTextElement(BasePageElement):
    """This class gets the search text from the specified locator"""

    #The locator for search box where search string is entered
    locator = 'q'


class BasePage(object):
    """Base class to initialize the base page that will be called from all pages"""

    def __init__(self, driver):
        self.driver = driver


class MainPage(BasePage):
    """Home page action methods come here. I.e. Python.org"""

    #Declares a variable that will contain the retrieved text
    search_text_element = SearchTextElement()

    def is_title_matches(self):
        """Verifies that the hardcoded text "Python" appears in page title"""
        return "Python" in self.driver.title

    def click_go_button(self):
        """Triggers the search"""
        element = self.driver.find_element(*MainPageLocators.GO_BUTTON)
        element.click()


class SearchResultsPage(BasePage):
    """Search results page action methods come here"""

    def is_results_found(self):
        # Probably should search for this text in the specific page
        # element, but as for now it works fine
        return "No results found." not in self.driver.page_source

Page elements

The element.py will look like this:

from selenium.webdriver.support.ui import WebDriverWait


class BasePageElement(object):
    """Base page class that is initialized on every page object class."""

    def __set__(self, obj, value):
        """Sets the text to the value supplied"""
        driver = obj.driver
        WebDriverWait(driver, 100).until(
            lambda driver: driver.find_element_by_name(self.locator))
        driver.find_element_by_name(self.locator).send_keys(value)

    def __get__(self, obj, owner):
        """Gets the text of the specified object"""
        driver = obj.driver
        WebDriverWait(driver, 100).until(
            lambda driver: driver.find_element_by_name(self.locator))
        element = driver.find_element_by_name(self.locator)
        return element.get_attribute("value")

Locators

One practice is to separate the locator strings from the places where they are used. In this example, locators of the same page belong to the same class.

The locators.py will look like this:

from selenium.webdriver.common.by import By

class MainPageLocators(object):
    """A class for main page locators. All main page locators should come here"""
    GO_BUTTON = (By.ID, 'submit')

class SearchResultsPageLocators(object):
    """A class for search results locators. All search results locators should come here"""
    pass

WebDriver API

Recommended Import Style

The API definitions in this chapter show the absolute location of classes. However, the recommended import style is as given below:

from selenium import webdriver

Then, you can access the classes like this:

webdriver.Firefox
webdriver.FirefoxProfile
webdriver.Chrome
webdriver.ChromeOptions
webdriver.Ie
webdriver.Opera
webdriver.PhantomJS
webdriver.Remote
webdriver.DesiredCapabilities
webdriver.ActionChains
webdriver.TouchActions
webdriver.Proxy

The special keys class (Keys) can be imported like this:

from selenium.webdriver.common.keys import Keys

The exception classes can be imported like this (Replace the TheNameOfTheExceptionClass with actual class name given below):

from selenium.common.exceptions import [TheNameOfTheExceptionClass]

Conventions used in the API

Some attributes are callable (methods) and others are non-callable (properties). All the callable attributes end with round brackets.

Here is an example for property:

  • current_url

    URL of the current loaded page.

    Usage:

    driver.current_url

Here is an example for a method:

  • close()

    Closes the current window.

    Usage:

    driver.close()

Exceptions

Exceptions that may happen in all the webdriver code.

  • exceptionselenium.common.exceptions.ElementNotInteractableException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.InvalidElementStateExceptionThrown when an element is present in the DOM but interactions with that element will hit another element do to paint order

  • exceptionselenium.common.exceptions.ElementNotSelectableException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.InvalidElementStateExceptionThrown when trying to select an unselectable element.For example, selecting a ‘script’ element.

  • exceptionselenium.common.exceptions.ElementNotVisibleException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.InvalidElementStateExceptionThrown when an element is present on the DOM, but it is not visible, and so is not able to be interacted with.Most commonly encountered when trying to click or read text of an element that is hidden from view.

  • exceptionselenium.common.exceptions.ErrorInResponseException(response, msg)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when an error has occurred on the server side.This may happen when communicating with the firefox extension or the remote driver server.

  • exceptionselenium.common.exceptions.ImeActivationFailedException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when activating an IME engine has failed.

  • exceptionselenium.common.exceptions.ImeNotAvailableException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when IME support is not available. This exception is thrown for every IME-related method call if IME support is not available on the machine.

  • exceptionselenium.common.exceptions.InvalidArgumentException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThe arguments passed to a command are either invalid or malformed.

  • exceptionselenium.common.exceptions.InvalidCookieDomainException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when attempting to add a cookie under a different domain than the current URL.

  • exceptionselenium.common.exceptions.InvalidElementStateException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverException

  • exceptionselenium.common.exceptions.InvalidSelectorException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.NoSuchElementExceptionThrown when the selector which is used to find an element does not return a WebElement. Currently this only happens when the selector is an xpath expression and it is either syntactically invalid (i.e. it is not a xpath expression) or the expression does not select WebElements (e.g. “count(//input)”).

  • exceptionselenium.common.exceptions.InvalidSwitchToTargetException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when frame or window target to be switched doesn’t exist.

  • exceptionselenium.common.exceptions.MoveTargetOutOfBoundsException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when the target provided to the ActionsChains move() method is invalid, i.e. out of document.

  • exceptionselenium.common.exceptions.NoAlertPresentException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when switching to no presented alert.This can be caused by calling an operation on the Alert() class when an alert is not yet on the screen.

  • exceptionselenium.common.exceptions.NoSuchAttributeException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when the attribute of element could not be found.You may want to check if the attribute exists in the particular browser you are testing against. Some browsers may have different property names for the same property. (IE8’s .innerText vs. Firefox .textContent)

  • exceptionselenium.common.exceptions.NoSuchElementException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when element could not be found.If you encounter this exception, you may want to check the following:Check your selector used in your find_by…Element may not yet be on the screen at the time of the find operation, (webpage is still loading) see selenium.webdriver.support.wait.WebDriverWait() for how to write a wait wrapper to wait for an element to appear.

  • exceptionselenium.common.exceptions.NoSuchFrameException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.InvalidSwitchToTargetExceptionThrown when frame target to be switched doesn’t exist.

  • exceptionselenium.common.exceptions.NoSuchWindowException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.InvalidSwitchToTargetExceptionThrown when window target to be switched doesn’t exist.To find the current set of active window handles, you can get a list of the active window handles in the following way:print driver.window_handles

  • exceptionselenium.common.exceptions.RemoteDriverServerException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverException

  • exceptionselenium.common.exceptions.StaleElementReferenceException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when a reference to an element is now “stale”.Stale means the element no longer appears on the DOM of the page.Possible causes of StaleElementReferenceException include, but not limited to:You are no longer on the same page, or the page may have refreshed since the element was located.The element may have been removed and re-added to the screen, since it was located. Such as an element being relocated. This can happen typically with a javascript framework when values are updated and the node is rebuilt.Element may have been inside an iframe or another context which was refreshed.

  • exceptionselenium.common.exceptions.TimeoutException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when a command does not complete in enough time.

  • exceptionselenium.common.exceptions.UnableToSetCookieException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverExceptionThrown when a driver fails to set a cookie.

  • exceptionselenium.common.exceptions.UnexpectedAlertPresentException(msg=None, screen=None, stacktrace=None, alert_text=None)

    Bases: selenium.common.exceptions.WebDriverException

    Thrown when an unexpected alert appears. Usually raised when an unexpected modal is blocking the webdriver from executing any more commands.

  • exceptionselenium.common.exceptions.UnexpectedTagNameException(msg=None, screen=None, stacktrace=None)

    Bases: selenium.common.exceptions.WebDriverException

    Thrown when a support class did not get an expected web element.

  • exceptionselenium.common.exceptions.WebDriverException(msg=None, screen=None, stacktrace=None)

    Bases: exceptions.Exception

    Base webdriver exception.
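
In test code these exceptions are typically caught around the find/interact calls. A minimal sketch, assuming a local Firefox/geckodriver setup (the URL and selector are placeholders):

```python
from selenium import webdriver
from selenium.common.exceptions import (
    NoSuchElementException,
    StaleElementReferenceException,
)

driver = webdriver.Firefox()
driver.get("http://www.python.org")  # placeholder URL
try:
    elem = driver.find_element_by_name("q")  # placeholder selector
    elem.send_keys("pycon")
except NoSuchElementException:
    # The selector matched nothing: check it, or wait for the page to load.
    print("element not found")
except StaleElementReferenceException:
    # The element was located earlier but the DOM has changed since;
    # re-locate the element before interacting with it again.
    print("element went stale")
finally:
    driver.quit()
```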

Action Chains

The ActionChains implementation,

  • classselenium.webdriver.common.action_chains.ActionChains(driver)

    Bases: object

    ActionChains are a way to automate low-level interactions such as mouse movements, mouse button actions, key presses, and context menu interactions. This is useful for doing more complex actions like hover over and drag and drop.

    Generate user actions.

    When you call methods for actions on the ActionChains object, the actions are stored in a queue in the ActionChains object. When you call perform(), the events are fired in the order they are queued up.

    ActionChains can be used in a chain pattern:

    menu = driver.find_element_by_css_selector(".nav")
    hidden_submenu = driver.find_element_by_css_selector(".nav #submenu1")
    ActionChains(driver).move_to_element(menu).click(hidden_submenu).perform()

    Or actions can be queued up one by one, then performed:

    menu = driver.find_element_by_css_selector(".nav") 
    hidden_submenu = driver.find_element_by_css_selector(".nav #submenu1") 
    actions = ActionChains(driver) 
    actions.move_to_element(menu) 
    actions.click(hidden_submenu) 
    actions.perform() 

    Either way, the actions are performed in the order they are called, one after another.

    • __init__(driver)

      Creates a new ActionChains.

      Args:driver: The WebDriver instance which performs user actions.

    • click(on_element=None)

      Clicks an element.

      Args:on_element: The element to click. If None, clicks on current mouse position.

    • click_and_hold(on_element=None)

      Holds down the left mouse button on an element.

      Args:on_element: The element to mouse down. If None, clicks on current mouse position.

    • context_click(on_element=None)

      Performs a context-click (right click) on an element.

      Args:on_element: The element to context-click. If None, clicks on current mouse position.

    • double_click(on_element=None)

      Double-clicks an element.

      Args:on_element: The element to double-click. If None, clicks on current mouse position.

    • drag_and_drop(source, target)

      Holds down the left mouse button on the source element, then moves to the target element and releases the mouse button.

      Args:source: The element to mouse down.

      target: The element to mouse up.

    • drag_and_drop_by_offset(source, xoffset, yoffset)

      Holds down the left mouse button on the source element, then moves to the target offset and releases the mouse button.

      Args:source: The element to mouse down.

      xoffset: X offset to move to.

      yoffset: Y offset to move to.

    • key_down(value, element=None)

      Sends a key press only, without releasing it. Should only be used with modifier keys (Control, Alt and Shift).

      Args:value: The modifier key to send. Values are defined in Keys class.

      element: The element to send keys to. If None, sends a key to the currently focused element.

      Example, pressing ctrl+c:

      ActionChains(driver).key_down(Keys.CONTROL).send_keys('c').key_up(Keys.CONTROL).perform()

    • key_up(value, element=None)

      Releases a modifier key.

      Args:value: The modifier key to send. Values are defined in Keys class.

      element: The element to send keys to. If None, sends a key to the currently focused element.

      Example, pressing ctrl+c:

      ActionChains(driver).key_down(Keys.CONTROL).send_keys('c').key_up(Keys.CONTROL).perform()

    • move_by_offset(xoffset, yoffset)

      Moving the mouse to an offset from current mouse position.

      Args:xoffset: X offset to move to, as a positive or negative integer.

      yoffset: Y offset to move to, as a positive or negative integer.

    • move_to_element(to_element)

      Moving the mouse to the middle of an element.

      Args:to_element: The WebElement to move to.

    • move_to_element_with_offset(to_element, xoffset, yoffset)

      Move the mouse by an offset of the specified element. Offsets are relative to the top-left corner of the element.

      Args:to_element: The WebElement to move to.

      xoffset: X offset to move to.

      yoffset: Y offset to move to.

    • pause(seconds)

      Pause all inputs for the specified duration in seconds

    • perform()

      Performs all stored actions.

    • release(on_element=None)

      Releasing a held mouse button on an element.

      Args:on_element: The element to mouse up. If None, releases on current mouse position.

    • reset_actions()

      Clears actions that are already stored locally and on the remote end

    • send_keys(*keys_to_send)

      Sends keys to current focused element.

      Args:keys_to_send: The keys to send. Modifier keys constants can be found in the ‘Keys’ class.

    • send_keys_to_element(element, *keys_to_send)

      Sends keys to an element.

      Args:element: The element to send keys.

      keys_to_send: The keys to send. Modifier keys constants can be found in the ‘Keys’ class.
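
Tying the methods above together, a drag-and-drop sketch (the page URL and the source/target element ids are assumptions, not taken from a real site):

```python
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Firefox()
driver.get("http://example.com/sortable")  # placeholder URL

source = driver.find_element_by_id("source")  # assumed element id
target = driver.find_element_by_id("target")  # assumed element id

actions = ActionChains(driver)
actions.drag_and_drop(source, target)  # queues press, move and release
actions.perform()                      # nothing runs until perform()
driver.quit()
```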

Alerts

The Alert implementation.

class selenium.webdriver.common.alert.Alert(driver)

Bases: object

Allows working with alerts.

Use this class to interact with alert prompts. It contains methods for dismissing, accepting, inputting, and getting text from alert prompts.

Accepting / Dismissing alert prompts:

Alert(driver).accept()
Alert(driver).dismiss()

Inputting a value into an alert prompt:

name_prompt = Alert(driver)
name_prompt.send_keys("William Shakespeare")
name_prompt.accept()

Reading the text of a prompt for verification:

alert_text = Alert(driver).text
self.assertEqual("Do you wish to quit?", alert_text)

  • __init__(driver)

    Creates a new Alert.

    Args:driver: The WebDriver instance which performs user actions.

  • accept()

    Accepts the alert available.

    Usage:

    Alert(driver).accept()  # Confirm an alert dialog.
  • dismiss()

    Dismisses the alert available.

  • send_keys(keysToSend)

    Send Keys to the Alert.

    Args:keysToSend: The text to be sent to Alert.

  • text

    Gets the text of the Alert.
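
An alert is often not present immediately after the triggering action; a common pattern is to wait for it with WebDriverWait and the alert_is_present expected condition before using the Alert API (the URL is a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://example.com/page-with-alert")  # placeholder URL

# Wait up to 10 seconds for an alert to appear, then switch to it.
alert = WebDriverWait(driver, 10).until(EC.alert_is_present())
print(alert.text)  # read the prompt text for verification
alert.accept()     # or alert.dismiss()
driver.quit()
```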

Special Keys

The Keys implementation.

  • class selenium.webdriver.common.keys.Keys

    Bases: object

    Set of special key codes.

    • ADD = ‘\ue025’

    • ALT = ‘\ue00a’

    • ARROW_DOWN = ‘\ue015’

    • ARROW_LEFT = ‘\ue012’

    • ARROW_RIGHT = ‘\ue014’

    • ARROW_UP = ‘\ue013’

    • BACKSPACE = ‘\ue003’

    • BACK_SPACE = ‘\ue003’

    • CANCEL = ‘\ue001’

    • CLEAR = ‘\ue005’

    • COMMAND = ‘\ue03d’

    • CONTROL = ‘\ue009’

    • DECIMAL = ‘\ue028’

    • DELETE = ‘\ue017’

    • DIVIDE = ‘\ue029’

    • DOWN = ‘\ue015’

    • END = ‘\ue010’

    • ENTER = ‘\ue007’

    • EQUALS = ‘\ue019’

    • ESCAPE = ‘\ue00c’

    • F1 = ‘\ue031’

    • F10 = ‘\ue03a’

    • F11 = ‘\ue03b’

    • F12 = ‘\ue03c’

    • F2 = ‘\ue032’

    • F3 = ‘\ue033’

    • F4 = ‘\ue034’

    • F5 = ‘\ue035’

    • F6 = ‘\ue036’

    • F7 = ‘\ue037’

    • F8 = ‘\ue038’

    • F9 = ‘\ue039’

    • HELP = ‘\ue002’

    • HOME = ‘\ue011’

    • INSERT = ‘\ue016’

    • LEFT = ‘\ue012’

    • LEFT_ALT = ‘\ue00a’

    • LEFT_CONTROL = ‘\ue009’

    • LEFT_SHIFT = ‘\ue008’

    • META = ‘\ue03d’

    • MULTIPLY = ‘\ue024’

    • NULL = ‘\ue000’

    • NUMPAD0 = ‘\ue01a’

    • NUMPAD1 = ‘\ue01b’

    • NUMPAD2 = ‘\ue01c’

    • NUMPAD3 = ‘\ue01d’

    • NUMPAD4 = ‘\ue01e’

    • NUMPAD5 = ‘\ue01f’

    • NUMPAD6 = ‘\ue020’

    • NUMPAD7 = ‘\ue021’

    • NUMPAD8 = ‘\ue022’

    • NUMPAD9 = ‘\ue023’

    • PAGE_DOWN = ‘\ue00f’

    • PAGE_UP = ‘\ue00e’

    • PAUSE = ‘\ue00b’

    • RETURN = ‘\ue006’

    • RIGHT = ‘\ue014’

    • SEMICOLON = ‘\ue018’

    • SEPARATOR = ‘\ue026’

    • SHIFT = ‘\ue008’

    • SPACE = ‘\ue00d’

    • SUBTRACT = ‘\ue027’

    • TAB = ‘\ue004’

    • UP = ‘\ue013’

Locate elements By

These are the attributes which can be used to locate elements. See the Locating Elements chapter for example usages.

The By implementation.

  • class selenium.webdriver.common.by.By

    Bases: object

    Set of supported locator strategies.

    • CLASS_NAME = ‘class name’

    • CSS_SELECTOR = ‘css selector’

    • ID = ‘id’

    • LINK_TEXT = ‘link text’

    • NAME = ‘name’

    • PARTIAL_LINK_TEXT = ‘partial link text’

    • TAG_NAME = ‘tag name’

    • XPATH = ‘xpath’
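
These strategy strings are what the generic find_element(by, value) and find_elements(by, value) methods accept; a sketch using a live session (the URL is the one from the quick-start example):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://www.python.org")

# By.NAME with find_element is equivalent to find_element_by_name("q").
elem = driver.find_element(By.NAME, "q")
links = driver.find_elements(By.TAG_NAME, "a")
print(len(links))
driver.quit()
```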

Desired Capabilities

See the Using Selenium with Remote WebDriver section for example usages of desired capabilities.

The Desired Capabilities implementation.

  • class selenium.webdriver.common.desired_capabilities.DesiredCapabilities

    Bases: object

    Set of default supported desired capabilities.

    Use this as a starting point for creating a desired capabilities object for requesting remote webdrivers for connecting to selenium server or selenium grid.

    Usage Example:

    from selenium import webdriver
    
    selenium_grid_url = "http://198.0.0.1:4444/wd/hub"
    
    # Create a desired capabilities object as a starting point.
    capabilities = DesiredCapabilities.FIREFOX.copy()
    capabilities['platform'] = "WINDOWS"
    capabilities['version'] = "10"
    
    # Instantiate an instance of Remote WebDriver with the desired capabilities.
    driver = webdriver.Remote(desired_capabilities=capabilities,
                              command_executor=selenium_grid_url)

    Note: Always use ‘.copy()’ on the DesiredCapabilities object to avoid the side effects of altering the Global class instance.

    • ANDROID = {‘browserName’: ‘android’, ‘platform’: ‘ANDROID’, ‘version’: ‘’}

    • CHROME = {‘browserName’: ‘chrome’, ‘platform’: ‘ANY’, ‘version’: ‘’}

    • EDGE = {‘browserName’: ‘MicrosoftEdge’, ‘platform’: ‘WINDOWS’, ‘version’: ‘’}

    • FIREFOX = {‘acceptInsecureCerts’: True, ‘browserName’: ‘firefox’, ‘marionette’: True}

    • HTMLUNIT = {‘browserName’: ‘htmlunit’, ‘platform’: ‘ANY’, ‘version’: ‘’}

    • HTMLUNITWITHJS = {‘browserName’: ‘htmlunit’, ‘javascriptEnabled’: True, ‘platform’: ‘ANY’, ‘version’: ‘firefox’}

    • INTERNETEXPLORER = {‘browserName’: ‘internet explorer’, ‘platform’: ‘WINDOWS’, ‘version’: ‘’}

    • IPAD = {‘browserName’: ‘iPad’, ‘platform’: ‘MAC’, ‘version’: ‘’}

    • IPHONE = {‘browserName’: ‘iPhone’, ‘platform’: ‘MAC’, ‘version’: ‘’}

    • OPERA = {‘browserName’: ‘opera’, ‘platform’: ‘ANY’, ‘version’: ‘’}

    • PHANTOMJS = {‘browserName’: ‘phantomjs’, ‘javascriptEnabled’: True, ‘platform’: ‘ANY’, ‘version’: ‘’}

    • SAFARI = {‘browserName’: ‘safari’, ‘platform’: ‘MAC’, ‘version’: ‘’}

    • WEBKITGTK = {‘browserName’: ‘MiniBrowser’, ‘platform’: ‘ANY’, ‘version’: ‘’}

Touch Actions

The Touch Actions implementation

  • class selenium.webdriver.common.touch_actions.TouchActions(driver)

    Bases: object

    Generate touch actions. Works like ActionChains; actions are stored in the TouchActions object and are fired with perform().

    • __init__(driver)

      Creates a new TouchActions object.

      Args:driver: The WebDriver instance which performs user actions. It should be with touchscreen enabled.

    • double_tap(on_element)

      Double taps on a given element.

      Args:on_element: The element to tap.

    • flick(xspeed, yspeed)

      Flicks, starting anywhere on the screen.

      Args:xspeed: The X speed in pixels per second.

      yspeed: The Y speed in pixels per second.

    • flick_element(on_element, xoffset, yoffset, speed)

      Flick starting at on_element, and moving by the xoffset and yoffset with specified speed.

      Args:on_element: Flick will start at center of element.

      xoffset: X offset to flick to.

      yoffset: Y offset to flick to.

      speed: Pixels per second to flick.

    • long_press(on_element)

      Long press on an element.

      Args:on_element: The element to long press.

    • move(xcoord, ycoord)

      Move held tap to specified location.

      Args:xcoord: X Coordinate to move.

      ycoord: Y Coordinate to move.

    • perform()

      Performs all stored actions.

    • release(xcoord, ycoord)

      Release a previously issued ‘tap and hold’ command at the specified location.

      Args:xcoord: X Coordinate to release.

      ycoord: Y Coordinate to release.

    • scroll(xoffset, yoffset)

      Touch and scroll, moving by xoffset and yoffset.

      Args:xoffset: X offset to scroll to.

      yoffset: Y offset to scroll to.

    • scroll_from_element(on_element, xoffset, yoffset)

      Touch and scroll starting at on_element, moving by xoffset and yoffset.

      Args:on_element: The element where scroll starts.

      xoffset: X offset to scroll to.

      yoffset: Y offset to scroll to.

    • tap(on_element)

      Taps on a given element.

      Args:on_element: The element to tap.

    • tap_and_hold(xcoord, ycoord)

      Touch down at given coordinates.

      Args:xcoord: X Coordinate to touch down.

      ycoord: Y Coordinate to touch down.

Proxy

The Proxy implementation.

  • class selenium.webdriver.common.proxy.Proxy(raw=None)

    Bases: object

    Proxy contains information about proxy type and necessary proxy settings.

    • __init__(raw=None)

      Creates a new Proxy.

      Args:raw: raw proxy data. If None, default class values are used.

    • add_to_capabilities(capabilities)

      Adds proxy information as capability in specified capabilities.

      Args:capabilities: The capabilities to which proxy will be added.

    • auto_detect

      Returns autodetect setting.

    • autodetect = False

    • ftpProxy = ‘’

    • ftp_proxy

      Returns ftp proxy setting.

    • httpProxy = ‘’

    • http_proxy

      Returns http proxy setting.

    • noProxy = ‘’

    • no_proxy

      Returns noproxy setting.

    • proxyAutoconfigUrl = ‘’

    • proxyType = {‘ff_value’: 6, ‘string’: ‘UNSPECIFIED’}

    • proxy_autoconfig_url

      Returns proxy autoconfig url setting.

    • proxy_type

      Returns proxy type as ProxyType.

    • socksPassword = ‘’

    • socksProxy = ‘’

    • socksUsername = ‘’

    • socks_password

      Returns socks proxy password setting.

    • socks_proxy

      Returns socks proxy setting.

    • socks_username

      Returns socks proxy username setting.

    • sslProxy = ‘’

    • ssl_proxy

      Returns https proxy setting.
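
A sketch of wiring a Proxy into capabilities before starting a driver (the proxy address is a placeholder):

```python
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType

proxy = Proxy()
proxy.proxy_type = ProxyType.MANUAL
proxy.http_proxy = "203.0.113.1:8080"  # placeholder address
proxy.ssl_proxy = "203.0.113.1:8080"   # placeholder address

# Copy the class-level dict, then let the proxy add its entries.
capabilities = webdriver.DesiredCapabilities.FIREFOX.copy()
proxy.add_to_capabilities(capabilities)

driver = webdriver.Firefox(capabilities=capabilities)
```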

  • class selenium.webdriver.common.proxy.ProxyType

    Bases: object

    Set of possible types of proxy.

    • Each proxy type has 2 properties:

      ‘ff_value’ is the value of the Firefox profile preference; ‘string’ is the id of the proxy type.

    • classmethod load(value)

    • AUTODETECT = {‘ff_value’: 4, ‘string’: ‘AUTODETECT’}

    • DIRECT = {‘ff_value’: 0, ‘string’: ‘DIRECT’}

    • MANUAL = {‘ff_value’: 1, ‘string’: ‘MANUAL’}

    • PAC = {‘ff_value’: 2, ‘string’: ‘PAC’}

    • RESERVED_1 = {‘ff_value’: 3, ‘string’: ‘RESERVED1’}

    • SYSTEM = {‘ff_value’: 5, ‘string’: ‘SYSTEM’}

    • UNSPECIFIED = {‘ff_value’: 6, ‘string’: ‘UNSPECIFIED’}

  • class selenium.webdriver.common.proxy.ProxyTypeFactory

    Bases: object

    Factory for proxy types.

    • static make(ff_value, string)

Utilities

The Utils methods.

  • selenium.webdriver.common.utils.find_connectable_ip(host, port=None)

    Resolve a hostname to an IP, preferring IPv4 addresses.

    We prefer IPv4 so that we don’t change behavior from previous IPv4-only implementations, and because some drivers (e.g., FirefoxDriver) do not support IPv6 connections.

    If the optional port number is provided, only IPs that listen on the given port are considered.

    Args:host - A hostname.

    port - Optional port number.

    Returns:A single IP address, as a string. If any IPv4 address is found, one is returned. Otherwise, if any IPv6 address is found, one is returned. If neither, then None is returned.

  • selenium.webdriver.common.utils.free_port()

    Determines a free port using sockets.

  • selenium.webdriver.common.utils.is_connectable(port, host='localhost')

    Tries to connect to the server at port to see if it is running.

    Args:port - The port to connect.

  • selenium.webdriver.common.utils.is_url_connectable(port)

    Tries to connect to the HTTP server at /status path and specified port to see if it responds successfully.

    Args:port - The port to connect.

  • selenium.webdriver.common.utils.join_host_port(host, port)

    Joins a hostname and port together. This is a minimal implementation intended to cope with IPv6 literals. For example, _join_host_port('::1', 80) == '[::1]:80'.

    Args:host - A hostname.

    port - An integer port.

  • selenium.webdriver.common.utils.keys_to_typing(value)

    Processes the values that will be typed in the element.
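
The port and host helpers are small enough to sketch with the standard library alone; the following is an illustrative reimplementation of free_port and join_host_port, not selenium's exact code:

```python
import socket


def free_port():
    # Bind to port 0 so the OS assigns a free ephemeral port,
    # then read back the number and release the socket.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", 0))
    port = s.getsockname()[1]
    s.close()
    return port


def join_host_port(host, port):
    # IPv6 literals contain ':' and must be bracketed in host:port form.
    if ":" in host and not host.startswith("["):
        return "[%s]:%d" % (host, port)
    return "%s:%d" % (host, port)


print(join_host_port("::1", 80))          # [::1]:80
print(join_host_port("localhost", 4444))  # localhost:4444
print(free_port())                        # some free port number
```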

Service

  • class selenium.webdriver.common.service.Service(executable, port=0, log_file=-3, env=None, start_error_message='')

    Bases: object

    • __init__(executable, port=0, log_file=-3, env=None, start_error_message='')

      Initialize self. See help(type(self)) for accurate signature.

    • assert_process_still_running()

    • command_line_args()

    • is_connectable()

    • send_remote_shutdown_command()

    • start()

      Starts the Service.

      Exceptions:WebDriverException: Raised either when it can’t start the service or when it can’t connect to the service.

    • stop()

      Stops the service.

    • service_url

      Gets the url of the Service

Application Cache

The ApplicationCache implementation.

  • class selenium.webdriver.common.html5.application_cache.ApplicationCache(driver)

    Bases: object

    • __init__(driver)

      Creates a new Application Cache.

      Args:driver: The WebDriver instance which performs user actions.

    • CHECKING = 2

    • DOWNLOADING = 3

    • IDLE = 1

    • OBSOLETE = 5

    • UNCACHED = 0

    • UPDATE_READY = 4

    • status

      Returns a current status of application cache.

Firefox WebDriver

class selenium.webdriver.firefox.webdriver.WebDriver(firefox_profile=None, firefox_binary=None, timeout=30, capabilities=None, proxy=None, executable_path='geckodriver', options=None, service_log_path='geckodriver.log', firefox_options=None, service_args=None, desired_capabilities=None, log_path=None, keep_alive=True)

Bases: selenium.webdriver.remote.webdriver.WebDriver

__init__(firefox_profile=None, firefox_binary=None, timeout=30, capabilities=None, proxy=None, executable_path='geckodriver', options=None, service_log_path='geckodriver.log', firefox_options=None, service_args=None, desired_capabilities=None, log_path=None, keep_alive=True)

Starts a new local session of Firefox.

Based on the combination and specificity of the various keyword arguments, a capabilities dictionary will be constructed that is passed to the remote end.

The keyword arguments given to this constructor are helpers to more easily allow Firefox WebDriver sessions to be customised with different options. They are mapped on to a capabilities dictionary that is passed on to the remote end.

As some of the options, such as firefox_profile and options.profile are mutually exclusive, precedence is given from how specific the setting is. capabilities is the least specific keyword argument, followed by options, followed by firefox_binary and firefox_profile.

In practice this means that if firefox_profile and options.profile are both set, the selected profile instance will always come from the most specific variable. In this case that would be firefox_profile. This will result in options.profile being ignored because it is considered a less specific setting than the top-level firefox_profile keyword argument. Similarly, if you had specified a capabilities["moz:firefoxOptions"]["profile"] Base64 string, this would rank below options.profile.

  • context(context)

    Sets the context that Selenium commands are running in using a with statement. The state of the context on the server is saved before entering the block, and restored upon exiting it.

    Parameters: context – Context, may be one of the class properties CONTEXT_CHROME or CONTEXT_CONTENT.

Usage example:

  with selenium.context(selenium.CONTEXT_CHROME):
      # chrome scope
      ... do stuff ...
  • install_addon(path, temporary=None)

    Installs Firefox addon.

    Returns identifier of installed addon. This identifier can later be used to uninstall addon.

    Parameters: path – Absolute path to the addon that will be installed.

    Usage:

    driver.install_addon('/path/to/firebug.xpi')

  • quit()

    Quits the driver and closes every associated window.

  • set_context(context)

  • uninstall_addon(identifier)

    Uninstalls Firefox addon using its identifier.

    Usage:

    driver.uninstall_addon('addon@foo.com')

  • CONTEXT_CHROME = ‘chrome’

  • CONTEXT_CONTENT = ‘content’

  • NATIVE_EVENTS_ALLOWED = True

  • firefox_profile

Firefox WebDriver Options

  • class selenium.webdriver.firefox.options.Log

    Bases: object

    • __init__()

      Initialize self. See help(type(self)) for accurate signature.

    • to_capabilities()

  • class selenium.webdriver.firefox.options.Options

    Bases: object

    • __init__()

      Initialize self. See help(type(self)) for accurate signature.

    • add_argument(argument)

      Add argument to be used for the browser process.

    • set_capability(name, value)

      Sets a capability.

    • set_headless(headless=True)

      Deprecated; use options.headless = True instead.

    • set_preference(name, value)

      Sets a preference.

    • to_capabilities()

      Marshals the Firefox options to a moz:firefoxOptions object.

    • KEY = ‘moz:firefoxOptions’

    • accept_insecure_certs

    • arguments

      Returns a list of browser process arguments.

    • binary

      Returns the FirefoxBinary instance

    • binary_location

      Returns the location of the binary.

    • capabilities

    • headless

      Returns whether or not the headless argument is set

    • preferences

      Returns a dict of preferences.

    • profile

      Returns the Firefox profile to use.

    • proxy

      returns Proxy if set otherwise None.
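
A typical use of Options is turning on headless mode and setting a preference before the driver starts (the download directory is only an example value):

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.headless = True  # run Firefox without a visible window
options.set_preference("browser.download.dir", "/tmp")  # example preference

driver = webdriver.Firefox(options=options)
driver.get("http://www.python.org")
print(driver.title)
driver.quit()
```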

Firefox WebDriver Profile

exception selenium.webdriver.firefox.firefox_profile.AddonFormatError

Bases: Exception

Exception for not well-formed add-on manifest files

class selenium.webdriver.firefox.firefox_profile.FirefoxProfile(profile_directory=None)

Bases: object

  • __init__(profile_directory=None)

    Initialises a new instance of a Firefox Profile

    Args:profile_directory: Directory of the profile that you want to use. If a directory is passed in, it will be cloned and the cloned directory will be used by the driver when instantiated. This defaults to None, in which case a new directory is created when the object is created.

  • add_extension(extension=’webdriver.xpi’)

  • set_preference(key, value)

    sets the preference that we want in the profile.

  • set_proxy(proxy)

  • update_preferences()

  • ANONYMOUS_PROFILE_NAME = ‘WEBDRIVER_ANONYMOUS_PROFILE’

  • DEFAULT_PREFERENCES = None

  • accept_untrusted_certs

  • assume_untrusted_cert_issuer

  • encoded

    A zipped, base64 encoded string of profile directory for use with remote WebDriver JSON wire protocol

  • native_events_enabled

  • path

    Gets the profile directory that is currently being used

  • port

    Gets the port that WebDriver is working on

Firefox WebDriver Binary

class selenium.webdriver.firefox.firefox_binary.FirefoxBinary(firefox_path=None, log_file=None)

Bases: object

  • __init__(firefox_path=None, log_file=None)

    Creates a new instance of Firefox binary.

    Args:firefox_path - Path to the Firefox executable. By default, it will be detected from the standard locations.

    log_file - A file object to redirect the firefox process output to. It can be sys.stdout. Please note that with a parallel run the output won’t be synchronous. By default, it will be redirected to /dev/null.

  • add_command_line_options(*args)

  • kill()

    Kill the browser. This is useful when the browser is stuck.

  • launch_browser(profile, timeout=30)

    Launches the browser for the given profile name. It is assumed the profile already exists.

  • which(fname)

    Returns the fully qualified path by searching the PATH for the given name.

  • NO_FOCUS_LIBRARY_NAME = ‘x_ignore_nofocus.so’

Firefox WebDriver Extension Connection

  • exception selenium.webdriver.firefox.extension_connection.ExtensionConnectionError

    Bases: Exception

    An internal error occurred in the extension. Might be caused by bad input or bugs in webdriver.

  • class selenium.webdriver.firefox.extension_connection.ExtensionConnection(host, firefox_profile, firefox_binary=None, timeout=30)

    Bases: selenium.webdriver.remote.remote_connection.RemoteConnection

    • __init__(host, firefox_profile, firefox_binary=None, timeout=30)

      Initialize self. See help(type(self)) for accurate signature.

    • connect()

      Connects to the extension and retrieves the session id.

    • classmethod connect_and_quit()

      Connects to a running browser and quits immediately.

    • classmethod is_connectable()

      Tries to connect to the extension but does not retrieve context.

    • quit(sessionId=None)

Chrome WebDriver

  • class selenium.webdriver.chrome.webdriver.WebDriver(executable_path='chromedriver', port=0, options=None, service_args=None, desired_capabilities=None, service_log_path=None, chrome_options=None, keep_alive=True)

    Bases: selenium.webdriver.remote.webdriver.WebDriver

    Controls the ChromeDriver and allows you to drive the browser.

    You will need to download the ChromeDriver executable from http://chromedriver.storage.googleapis.com/index.html

    • __init__(executable_path='chromedriver', port=0, options=None, service_args=None, desired_capabilities=None, service_log_path=None, chrome_options=None, keep_alive=True)

      Creates a new instance of the chrome driver.

      Starts the service and then creates new instance of chrome driver.

      Args:executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH

      port - port you would like the service to run, if left as 0, a free port will be found.

      options - this takes an instance of ChromeOptions

      service_args - List of args to pass to the driver service

      desired_capabilities - Dictionary object with non-browser specific capabilities only, such as “proxy” or “loggingPref”.

      service_log_path - Where to log information from the driver.

      chrome_options - Deprecated argument for options

      keep_alive - Whether to configure ChromeRemoteConnection to use HTTP keep-alive.

    • create_options()

    • execute_cdp_cmd(cmd, cmd_args)

      Execute Chrome Devtools Protocol command and get returned result

      The command and command args should follow chrome devtools protocol domains/commands, refer to link https://chromedevtools.github.io/devtools-protocol/

      Args:cmd: A str, command name

      cmd_args: A dict, command args. empty dict {} if there is no command args

      Usage:driver.execute_cdp_cmd('Network.getResponseBody', {'requestId': requestId})

      Returns:A dict, empty dict {} if there is no result to return. For example to getResponseBody:{‘base64Encoded’: False, ‘body’: ‘response body string’}

    • get_network_conditions()

      Gets Chrome network emulation settings.

      Returns:A dict. For example:{‘latency’: 4, ‘download_throughput’: 2, ‘upload_throughput’: 2, ‘offline’: False}

    • launch_app(id)

      Launches Chrome app specified by id.

    • quit()

      Closes the browser and shuts down the ChromeDriver executable that was started when the driver was created.

    • set_network_conditions(**network_conditions)

      Sets Chrome network emulation settings.

      Args:network_conditions: A dict with conditions specification.

      Usage:

      driver.set_network_conditions(
          offline=False,
          latency=5,  # additional latency (ms)
          download_throughput=500 * 1024,  # maximal throughput
          upload_throughput=500 * 1024)  # maximal throughput

      Note: ‘throughput’ can be used to set both (download and upload).
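
Putting the Chrome-specific methods together, a sketch that emulates a constrained network before loading a page (the throughput and latency values are illustrative, and a local chromedriver is assumed):

```python
from selenium import webdriver

driver = webdriver.Chrome()

# Emulate a slow network, then read the settings back.
driver.set_network_conditions(
    offline=False,
    latency=100,                     # additional latency (ms)
    download_throughput=200 * 1024,  # bytes per second
    upload_throughput=200 * 1024)
print(driver.get_network_conditions())

driver.get("http://www.python.org")
driver.quit()
```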

Chrome WebDriver Service

class selenium.webdriver.chrome.service.Service(executable_path, port=0, service_args=None, log_path=None, env=None)

Bases: selenium.webdriver.common.service.Service

Object that manages the starting and stopping of the ChromeDriver

  • __init__(executable_path, port=0, service_args=None, log_path=None, env=None)

    Creates a new instance of the Service

    Args:executable_path : Path to the ChromeDriver

    port : Port the service is running on

    service_args : List of args to pass to the chromedriver service

    log_path : Path for the chromedriver service to log to

  • command_line_args()

Remote WebDriver

The WebDriver implementation.

  • class selenium.webdriver.remote.webdriver.WebDriver(command_executor='http://127.0.0.1:4444/wd/hub', desired_capabilities=None, browser_profile=None, proxy=None, keep_alive=False, file_detector=None, options=None)

    Bases: object

    Controls a browser by sending commands to a remote server. This server is expected to be running the WebDriver wire protocol as defined at https://github.com/SeleniumHQ/selenium/wiki/JsonWireProtocol

    Attributes:session_id - String ID of the browser session started and controlled by this WebDriver.

    capabilities - Dictionary of effective capabilities of this browser session as returned by the remote server. See https://github.com/SeleniumHQ/selenium/wiki/DesiredCapabilities

    command_executor - remote_connection.RemoteConnection object used to execute commands.

    error_handler - errorhandler.ErrorHandler object used to handle errors.

    • __init__(command_executor='http://127.0.0.1:4444/wd/hub', desired_capabilities=None, browser_profile=None, proxy=None, keep_alive=False, file_detector=None, options=None)

      Create a new driver that will issue commands using the wire protocol.

      Args:command_executor - Either a string representing the URL of the remote server or a custom remote_connection.RemoteConnection object. Defaults to 'http://127.0.0.1:4444/wd/hub'.

      desired_capabilities - A dictionary of capabilities to request when starting the browser session. Required parameter.

      browser_profile - A selenium.webdriver.firefox.firefox_profile.FirefoxProfile object. Only used if Firefox is requested. Optional.

      proxy - A selenium.webdriver.common.proxy.Proxy object. The browser session will be started with the given proxy settings, if possible. Optional.

      keep_alive - Whether to configure remote_connection.RemoteConnection to use HTTP keep-alive. Defaults to False.

      file_detector - Pass a custom file detector object during instantiation. If None, then the default LocalFileDetector() will be used.

      options - instance of a driver options.Options class

    • add_cookie(cookie_dict)

      Adds a cookie to your current session.

      Args:cookie_dict: A dictionary object, with required keys - “name” and “value”;

      optional keys - “path”, “domain”, “secure”, “expiry”

      Usage:

      driver.add_cookie({'name': 'foo', 'value': 'bar'})
      driver.add_cookie({'name': 'foo', 'value': 'bar', 'path': '/'})
      driver.add_cookie({'name': 'foo', 'value': 'bar', 'path': '/', 'secure': True})

    • back()

      Goes one step backward in the browser history.

      Usage:driver.back()

    • close()

      Closes the current window.

      Usage:driver.close()

    • create_web_element(element_id)

      Creates a web element with the specified element_id.

    • delete_all_cookies()

      Delete all cookies in the scope of the session.

      Usage:
      driver.delete_all_cookies()

    • delete_cookie(name)

      Deletes a single cookie with the given name.

      Usage:
      driver.delete_cookie('my_cookie')

    • execute(driver_command, params=None)

      Sends a command to be executed by a command.CommandExecutor.

      Args:driver_command: The name of the command to execute as a string.

      params: A dictionary of named parameters to send with the command.

      Returns:The command’s JSON response loaded into a dictionary object.

    • execute_async_script(script, *args)

      Asynchronously executes JavaScript in the current window/frame.

      Args:script: The JavaScript to execute.

      *args: Any applicable arguments for your JavaScript.

      Usage:

      script = "var callback = arguments[arguments.length - 1]; " "window.setTimeout(function(){ callback('timeout') }, 3000);"
      driver.execute_async_script(script)
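The usage above can be wrapped so the script timeout is always set before the asynchronous call starts; without it, the call can hang forever when the page never invokes the injected callback. A hedged sketch: `run_async` and `ASYNC_SNIPPET` are illustrative names, and `driver` is assumed to be an already-started WebDriver.

```python
# JavaScript that invokes the driver-injected callback after 3 seconds.
ASYNC_SNIPPET = (
    "var callback = arguments[arguments.length - 1];"
    "window.setTimeout(function(){ callback('timeout'); }, 3000);"
)

def run_async(driver, script=ASYNC_SNIPPET, timeout=10):
    # set_script_timeout() makes execute_async_script raise instead of
    # hanging if the callback never fires within `timeout` seconds.
    driver.set_script_timeout(timeout)
    return driver.execute_async_script(script)
```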

    • execute_script(script, *args)

      Synchronously executes JavaScript in the current window/frame.

      Args:script: The JavaScript to execute.

      *args: Any applicable arguments for your JavaScript.

      Usage:
      driver.execute_script('return document.title;')

    • file_detector_context(file_detector_class, *args, **kwargs)

      Overrides the current file detector (if necessary) in limited context. Ensures the original file detector is set afterwards.

      Example:
      with webdriver.file_detector_context(UselessFileDetector):
          someinput.send_keys('/etc/hosts')

      Args:file_detector_class - Class of the desired file detector. If the class is different from the current file_detector, then the class is instantiated with args and kwargs and used as a file detector during the duration of the context manager.

      args - Optional arguments that get passed to the file detector class during instantiation.

      kwargs - Keyword arguments, passed the same way as args.

    • find_element(by='id', value=None)

      Find an element given a By strategy and locator. Prefer the find_element_by_* methods when possible.

      Usage:
      element = driver.find_element(By.ID, 'foo')

      Return type:WebElement
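Because `find_element` takes the strategy as data, it is easy to fall back through several locators when a page's markup varies. A minimal sketch under stated assumptions: `find_first` is a hypothetical helper, `driver` is an already-started WebDriver, and the broad `except Exception` stands in for selenium's `NoSuchElementException` to keep the sketch dependency-free.

```python
def find_first(driver, locators):
    """Try (by, value) locator pairs in order; return the first match.

    `locators` is an iterable of (by, value) tuples as accepted by
    driver.find_element(). Re-raises the last lookup error if none match.
    """
    last_error = None
    for by, value in locators:
        try:
            return driver.find_element(by, value)
        except Exception as exc:  # selenium raises NoSuchElementException here
            last_error = exc
    raise last_error

# With a live driver:
# element = find_first(driver, [(By.ID, 'login'), (By.NAME, 'login')])
```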

    • find_element_by_class_name(name)

      Finds an element by class name.

      Args:name: The class name of the element to find.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_class_name('foo')

    • find_element_by_css_selector(css_selector)

      Finds an element by css selector.

      Args:css_selector - CSS selector string, ex: ‘a.nav#home’

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_css_selector('#foo')

    • find_element_by_id(id_)

      Finds an element by id.

      Args:id_ - The id of the element to be found.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_id('foo')

    • find_element_by_link_text(link_text)

      Finds an element by link text.

      Args:link_text: The text of the element to be found.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_link_text('Sign In')

    • find_element_by_name(name)

      Finds an element by name.

      Args:name: The name of the element to find.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_name('foo')

    • find_element_by_partial_link_text(link_text)

      Finds an element by a partial match of its link text.

      Args:link_text: The text of the element to partially match on.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_partial_link_text('Sign')

    • find_element_by_tag_name(name)

      Finds an element by tag name.

      Args:name - name of html tag (eg: h1, a, span)

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_tag_name('h1')

    • find_element_by_xpath(xpath)

      Finds an element by xpath.

      Args:xpath - The xpath locator of the element to find.

      Returns:WebElement - the element if it was found
      Raises:NoSuchElementException - if the element wasn't found

      Usage:
      element = driver.find_element_by_xpath('//div/td[1]')

    • find_elements(by='id', value=None)

      Find elements given a By strategy and locator. Prefer the find_elements_by_* methods when possible.

      Usage:
      elements = driver.find_elements(By.CLASS_NAME, 'foo')

      Return type:list of WebElement

    • find_elements_by_class_name(name)

      Finds elements by class name.

      Args:name: The class name of the elements to find.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_class_name('foo')

    • find_elements_by_css_selector(css_selector)

      Finds elements by css selector.

      Args:css_selector - CSS selector string, ex: ‘a.nav#home’

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_css_selector('.foo')

    • find_elements_by_id(id_)

      Finds multiple elements by id.

      Args:id_ - The id of the elements to be found.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_id('foo')

    • find_elements_by_link_text(text)

      Finds elements by link text.

      Args:text: The text of the elements to be found.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_link_text('Sign In')

    • find_elements_by_name(name)

      Finds elements by name.

      Args:name: The name of the elements to find.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_name('foo')

    • find_elements_by_partial_link_text(link_text)

      Finds elements by a partial match of their link text.

      Args:link_text: The text of the element to partially match on.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_partial_link_text('Sign')

    • find_elements_by_tag_name(name)

      Finds elements by tag name.

      Args:name - name of html tag (eg: h1, a, span)

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_tag_name('h1')

    • find_elements_by_xpath(xpath)

      Finds multiple elements by xpath.

      Args:xpath - The xpath locator of the elements to be found.

      Returns:list of WebElement - a list with elements if any were found. An empty list if not

      Usage:
      elements = driver.find_elements_by_xpath("//div[contains(@class, 'foo')]")
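Since every `find_elements_*` method returns an empty list rather than raising when nothing matches, results can be fed straight into a comprehension. A minimal sketch: `link_texts` is a hypothetical helper and `driver` is assumed to be an already-started WebDriver.

```python
def link_texts(driver):
    """Collect the visible text of every anchor on the page.

    find_elements_* never raises for zero matches; it returns an empty
    list, so the comprehension below is safe on any page.
    """
    return [element.text for element in driver.find_elements_by_tag_name("a")]
```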

    • forward()

      Goes one step forward in the browser history.

      Usage:
      driver.forward()

    • fullscreen_window()

      Invokes the window manager-specific ‘full screen’ operation

    • get(url)

      Loads a web page in the current browser session.

    • get_cookie(name)

      Get a single cookie by name.

      Returns the cookie if found, None if not.

      Usage:
      driver.get_cookie('my_cookie')

    • get_cookies()

      Returns a set of dictionaries, corresponding to cookies visible in the current session.

      Usage:
      driver.get_cookies()

    • get_log(log_type)

      Gets the log for a given log type

      Args:log_type: type of log that will be returned

      Usage:
      driver.get_log('browser')
      driver.get_log('driver')
      driver.get_log('client')
      driver.get_log('server')

    • get_screenshot_as_base64()

      Gets the screenshot of the current window as a base64 encoded string which is useful in embedded images in HTML.

      Usage:
      driver.get_screenshot_as_base64()

    • get_screenshot_as_file(filename)

      Saves a screenshot of the current window to a PNG image file. Returns False if there is any IOError, else returns True. Use full paths in your filename.

      Args:filename: The full path you wish to save your screenshot to. This should end with a .png extension.

      Usage:
      driver.get_screenshot_as_file('/Screenshots/foo.png')
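Because `get_screenshot_as_file()` silently returns False on an IOError and expects a full path, it helps to build absolute, timestamped filenames up front. A sketch under stated assumptions: `screenshot_path` is a hypothetical helper, and the commented call assumes a live `driver`.

```python
import os
import time

def screenshot_path(directory, prefix="screenshot"):
    """Build an absolute, timestamped .png path for get_screenshot_as_file().

    Handing the method an absolute path into an existing directory avoids
    the silent False return that a bad relative path can cause.
    """
    name = "%s-%s.png" % (prefix, time.strftime("%Y%m%d-%H%M%S"))
    return os.path.abspath(os.path.join(directory, name))

# With a live driver:
# driver.get_screenshot_as_file(screenshot_path("/tmp"))
```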

    • get_screenshot_as_png()

      Gets the screenshot of the current window as binary data.

      Usage:
      driver.get_screenshot_as_png()

    • get_window_position(windowHandle='current')

      Gets the x,y position of the current window.

      Usage:
      driver.get_window_position()

    • get_window_rect()

      Gets the x, y coordinates of the window as well as height and width of the current window.

      Usage:
      driver.get_window_rect()

    • get_window_size(windowHandle='current')

      Gets the width and height of the current window.

      Usage:
      driver.get_window_size()

    • implicitly_wait(time_to_wait)

      Sets a sticky timeout to implicitly wait for an element to be found, or a command to complete. This method only needs to be called one time per session. To set the timeout for calls to execute_async_script, see set_script_timeout.

      Args:time_to_wait: Amount of time to wait (in seconds)

      Usage:
      driver.implicitly_wait(30)
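Since the implicit wait is sticky for the whole session, the three session-wide timeouts are often configured together right after the driver is created. This is a sketch, not a Selenium API: `configure_timeouts` is a hypothetical helper and `driver` is an already-started WebDriver; the default values are illustrative.

```python
def configure_timeouts(driver, implicit=10, page_load=30, script=30):
    """Apply the three session-wide timeouts in one place.

    implicitly_wait() is sticky for the whole session, so calling this
    once right after creating the driver is usually enough.
    """
    driver.implicitly_wait(implicit)          # element lookups
    driver.set_page_load_timeout(page_load)   # driver.get() navigations
    driver.set_script_timeout(script)         # execute_async_script calls
```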

    • maximize_window()

      Maximizes the current window that webdriver is using

    • minimize_window()

      Invokes the window manager-specific ‘minimize’ operation

    • quit()

      Quits the driver and closes every associated window.

      Usage:
      driver.quit()

    • refresh()

      Refreshes the current page.

      Usage:
      driver.refresh()

    • save_screenshot(filename)

      Saves a screenshot of the current window to a PNG image file. Returns False if there is any IOError, else returns True. Use full paths in your filename.

      Args:filename: The full path you wish to save your screenshot to. This should end with a .png extension.

      Usage:
      driver.save_screenshot('/Screenshots/foo.png')

    • set_page_load_timeout(time_to_wait)

      Set the amount of time to wait for a page load to complete before throwing an error.

      Args:time_to_wait: The amount of time to wait

      Usage:
      driver.set_page_load_timeout(30)

    • set_script_timeout(time_to_wait)

      Set the amount of time that the script should wait during an execute_async_script call before throwing an error.

      Args:time_to_wait: The amount of time to wait (in seconds)

      Usage:
      driver.set_script_timeout(30)

    • set_window_position(x, y, windowHandle='current')

      Sets the x,y position of the current window. (window.moveTo)

      Args:x: the x-coordinate in pixels to set the window position
      y: the y-coordinate in pixels to set the window position

      Usage:
      driver.set_window_position(0,0)

    • set_window_rect(x=None, y=None, width=None, height=None)

      Sets the x, y coordinates of the window as well as height and width of the current window.

      Usage:
      driver.set_window_rect(x=10, y=10)
      driver.set_window_rect(width=100, height=200)
      driver.set_window_rect(x=10, y=10, width=100, height=200)
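Since `set_window_rect()` takes all four values as keywords, the placement arithmetic can be done up front, for example to center a window of a given size. A minimal sketch: `centered_rect` is a hypothetical helper, the screen dimensions are supplied by the caller, and the commented call assumes a live `driver`.

```python
def centered_rect(screen_width, screen_height, width, height):
    """Compute keyword arguments that center a window of the given size
    on a screen, for use with driver.set_window_rect()."""
    return {
        "x": max((screen_width - width) // 2, 0),
        "y": max((screen_height - height) // 2, 0),
        "width": width,
        "height": height,
    }

# With a live driver:
# driver.set_window_rect(**centered_rect(1920, 1080, 1024, 768))
```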

    • set_window_size(width, height, windowHandle='current')

      Sets the width and height of the current window. (window.resizeTo)

      Args:width: the width in pixels to set the window to
      height: the height in pixels to set the window to

      Usage:
      driver.set_window_size(800, 600)

    • start_client()

      Called before starting a new session. This method may be overridden to define custom startup behavior.

    • start_session(capabilities, browser_profile=None)

      Creates a new session with the desired capabilities.

      Args:
      browser_name - The name of the browser to request.
      version - Which browser version to request.
      platform - Which platform to request the browser on.
      javascript_enabled - Whether the new session should support JavaScript.

      browser_profile - A selenium.webdriver.firefox.firefox_profile.FirefoxProfile object. Only used if Firefox is requested.

    • stop_client()

      Called after executing a quit command. This method may be overridden to define custom shutdown behavior.

    • switch_to_active_element()

      Deprecated: use driver.switch_to.active_element instead.

    • switch_to_alert()

      Deprecated: use driver.switch_to.alert instead.

    • switch_to_default_content()

      Deprecated: use driver.switch_to.default_content instead.

    • switch_to_frame(frame_reference)

      Deprecated: use driver.switch_to.frame instead.

    • switch_to_window(window_name)

      Deprecated: use driver.switch_to.window instead.

    • application_cache

      Returns an ApplicationCache object to interact with the browser app cache.

    • current_url

      Gets the URL of the current page.

      Usage:
      driver.current_url

    • current_window_handle

      Returns the handle of the current window.

      Usage:
      driver.current_window_handle

    • desired_capabilities

      Returns the driver's current desired capabilities being used.

    • file_detector

    • log_types

      Gets a list of the available log types

      Usage:
      driver.log_types

    • mobile

    • name

      Returns the name of the underlying browser for this instance.

      Usage:
      name = driver.name

    • orientation

      Gets the current orientation of the device

      Usage:
      orientation = driver.orientation

    • page_source

      Gets the source of the current page.

      Usage:
      driver.page_source

    • switch_to

      Returns:SwitchTo: an object containing all options to switch focus into

      Usage:

      element = driver.switch_to.active_element
      alert = driver.switch_to.alert
      driver.switch_to.default_content()
      driver.switch_to.frame('frame_name')
      driver.switch_to.frame(1)
      driver.switch_to.frame(driver.find_elements_by_tag_name("iframe")[0])
      driver.switch_to.parent_frame()
      driver.switch_to.window('main')
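A common pitfall with `switch_to.frame()` is forgetting to switch back, which makes later lookups fail. A context manager can pair the switch with its cleanup. This is a sketch, not a Selenium API: `in_frame` is a hypothetical helper and `driver` is an already-started WebDriver.

```python
from contextlib import contextmanager

@contextmanager
def in_frame(driver, frame_reference):
    """Switch into a frame and guarantee a switch back afterwards.

    frame_reference may be a name, an index, or a frame WebElement,
    exactly as accepted by driver.switch_to.frame().
    """
    driver.switch_to.frame(frame_reference)
    try:
        yield driver
    finally:
        driver.switch_to.default_content()

# With a live driver:
# with in_frame(driver, 'frame_name'):
#     driver.find_element_by_id('inside-the-frame')
```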
    • title

      Returns the title of the current page.

      Usage:
      title = driver.title

    • window_handles

      Returns the handles of all windows within the current session.

      Usage:
      driver.window_handles

Remote WebDriver WebElement

class selenium.webdriver.remote.webelement.WebElement(parent, id_, w3c=False)

Bases: object

Represents a DOM element.

Generally, all interesting operations that interact with a document will be performed through this interface.

All method calls will do a freshness check to ensure that the element reference is still valid. This essentially determines whether or not the element is still attached to the DOM. If this test fails, then a StaleElementReferenceException is thrown, and all future calls to this instance will fail.
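When a page re-renders, the usual remedy for that staleness exception is to re-find the element and retry the action. A hedged sketch: `retry_stale` is a hypothetical helper, and staleness is detected here by exception class name so the sketch stays dependency-free (real code would catch selenium's StaleElementReferenceException directly).

```python
def retry_stale(lookup, action, retries=3):
    """Re-run `lookup` and `action` when the element goes stale.

    `lookup` re-finds the element (e.g. a lambda over find_element) and
    `action` operates on the found element; any exception whose class
    name does not contain "Stale" is re-raised immediately.
    """
    for attempt in range(retries):
        element = lookup()
        try:
            return action(element)
        except Exception as exc:
            if "Stale" not in type(exc).__name__ or attempt == retries - 1:
                raise

# With a live driver:
# retry_stale(lambda: driver.find_element_by_id('go'), lambda el: el.click())
```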

  • __init__(parent, id_, w3c=False)

    Initialize self. See help(type(self)) for accurate signature.

  • clear()

    Clears the text if it’s a text entry element.

  • click()

    Clicks the element.

  • find_element(by=’id’, value=None)

    Find an element given a By strategy and locator. Prefer the find_element_by_* methods when possible.

    Usage:
    element = element.find_element(By.ID, 'foo')

    Return type:WebElement

  • find_element_by_class_name(name)

    Finds element within this element’s children by class name.

    Args:name: The class name of the element to find.

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_class_name('foo')

  • find_element_by_css_selector(css_selector)

    Finds element within this element’s children by CSS selector.

    Args:css_selector - CSS selector string, ex: ‘a.nav#home’

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_css_selector('#foo')

  • find_element_by_id(id_)

    Finds element within this element’s children by ID.

    Args:id_ - ID of child element to locate.

    Returns:WebElement - the element if it was found

    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    foo_element = element.find_element_by_id('foo')

  • find_element_by_link_text(link_text)

    Finds element within this element’s children by visible link text.

    Args:link_text - Link text string to search for.

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_link_text('Sign In')

  • find_element_by_name(name)

    Finds element within this element’s children by name.

    Args:name - name property of the element to find.

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_name('foo')

  • find_element_by_partial_link_text(link_text)

    Finds element within this element’s children by partially visible link text.

    Args:link_text: The text of the element to partially match on.

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_partial_link_text('Sign')

  • find_element_by_tag_name(name)

    Finds element within this element’s children by tag name.

    Args:name - name of html tag (eg: h1, a, span)

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_tag_name('h1')

  • find_element_by_xpath(xpath)

    Finds element by xpath.

    Args:xpath - xpath of element to locate. "//input[@class='myelement']"

    Note: The base path will be relative to this element's location.

    This will select the first link under this element.

    myelement.find_element_by_xpath(".//a")

    However, this will select the first link on the page.

    myelement.find_element_by_xpath("//a")

    Returns:WebElement - the element if it was found
    Raises:NoSuchElementException - if the element wasn't found

    Usage:
    element = element.find_element_by_xpath('//div/td[1]')

  • find_elements(by=’id’, value=None)

    Find elements given a By strategy and locator. Prefer the find_elements_by_* methods when possible.

    Usage:
    elements = element.find_elements(By.CLASS_NAME, 'foo')

    Return type:list of WebElement

  • find_elements_by_class_name(name)

    Finds a list of elements within this element’s children by class name.

    Args:name: The class name of the elements to find.

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_class_name('foo')

  • find_elements_by_css_selector(css_selector)

    Finds a list of elements within this element’s children by CSS selector.

    Args:css_selector - CSS selector string, ex: ‘a.nav#home’

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_css_selector('.foo')

  • find_elements_by_id(id_)

    Finds a list of elements within this element’s children by ID. Will return a list of webelements if found, or an empty list if not.

    Args:id_ - Id of child element to find.

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_id('foo')

  • find_elements_by_link_text(link_text)

    Finds a list of elements within this element’s children by visible link text.

    Args:link_text - Link text string to search for.

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_link_text('Sign In')

  • find_elements_by_name(name)

    Finds a list of elements within this element’s children by name.

    Args:name - name property to search for.

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_name('foo')

  • find_elements_by_partial_link_text(link_text)

    Finds a list of elements within this element’s children by link text.

    Args:link_text: The text of the element to partially match on.

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_partial_link_text('Sign')

  • find_elements_by_tag_name(name)

    Finds a list of elements within this element’s children by tag name.

    Args:name - name of html tag (eg: h1, a, span)

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_tag_name('h1')

  • find_elements_by_xpath(xpath)

    Finds elements within the element by xpath.

    Args:xpath - xpath locator string.

    Note: The base path will be relative to this element's location.

    This will select all links under this element.

    myelement.find_elements_by_xpath(".//a")

    However, this will select all links in the page itself.

    myelement.find_elements_by_xpath("//a")

    Returns:list of WebElement - a list with elements if any were found. An empty list if not

    Usage:
    elements = element.find_elements_by_xpath("//div[contains(@class, 'foo')]")

  • get_attribute(name)

    Gets the given attribute or property of the element.

    This method will first try to return the value of a property with the given name. If a property with that name doesn't exist, it returns the value of the attribute with the same name. If there's no attribute with that name, None is returned.

    Values which are considered truthy, that is equals "true" or "false", are returned as booleans. All other non-None values are returned as strings. For attributes or properties which do not exist, None is returned.

    Args:name - Name of the attribute/property to retrieve.

    Example:
    # Check if the "active" CSS class is applied to an element.
    is_active = "active" in target_element.get_attribute("class")

  • get_property(name)

    Gets the given property of the element.

    Args:name - Name of the property to retrieve.

    Example:
    text_length = target_element.get_property("text_length")
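Because `get_attribute()` returns None when the element has neither the property nor the attribute, calling code often wants a fallback value. A minimal sketch: `attr_or` is a hypothetical helper, and `element` stands for any live WebElement.

```python
def attr_or(element, name, default=""):
    """Read an attribute/property via get_attribute(), falling back to
    `default` when the element has neither (get_attribute returns None)."""
    value = element.get_attribute(name)
    return default if value is None else value

# With a live element:
# href = attr_or(link_element, "href", default="#")
```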

  • is_displayed()

    Whether the element is visible to a user.

  • is_enabled()

    Returns whether the element is enabled.

  • is_selected()

    Returns whether the element is selected.

    Can be used to check if a checkbox or radio button is selected.

  • screenshot(filename)

    Saves a screenshot of the current element to a PNG image file. Returns False if there is any IOError, else returns True. Use full paths in your filename.

    Args:filename: The full path you wish to save your screenshot to. This should end with a .png extension.

    Usage:
    element.screenshot('/Screenshots/foo.png')

  • send_keys(*value)

    Simulates typing into the element.

    Args:value - A string for typing, or setting form fields. For setting file inputs, this could be a local file path.

    Use this to send simple key events or to fill out form fields:

    form_textfield = driver.find_element_by_name('username') 
    form_textfield.send_keys("admin") 

    This can also be used to set file inputs.

    file_input = driver.find_element_by_name('profilePic')
    file_input.send_keys("path/to/profilepic.gif")
    # Generally it's better to wrap the file path in one of the methods
    # in os.path to return the actual path to support cross OS testing:
    # file_input.send_keys(os.path.abspath("path/to/profilepic.gif"))
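Typing into several fields follows the same clear-then-type pattern, which can be folded into a loop. This is a sketch, not a Selenium API: `fill_form` is a hypothetical helper, the field names in the commented call are illustrative, and `driver` is an already-started WebDriver.

```python
def fill_form(driver, values):
    """Type into several named form fields, clearing each first.

    `values` maps the name attribute of each field to the text to type;
    clear() avoids appending to pre-filled inputs.
    """
    for name, text in values.items():
        field = driver.find_element_by_name(name)
        field.clear()
        field.send_keys(text)

# With a live driver:
# fill_form(driver, {"username": "admin", "password": "secret"})
```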
  • submit()

    Submits a form.

  • value_of_css_property(property_name)

    The value of a CSS property.

  • id

    Internal ID used by selenium.

    This is mainly for internal use. Simple use cases such as checking if two WebElements refer to the same element can be done using ==:

    if element1 == element2:
        print("These 2 are equal")
  • location

    The location of the element in the renderable canvas.

  • location_once_scrolled_into_view

    THIS PROPERTY MAY CHANGE WITHOUT WARNING. Use this to discover where on the screen an element is so that we can click it. This method should cause the element to be scrolled into view.

    Returns the top lefthand corner location on the screen, or None if the element is not visible.

  • parent

    Internal reference to the WebDriver instance this element was found from.

  • rect

    A dictionary with the size and location of the element.
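The `rect` dictionary combines the element's `location` and `size`, so derived geometry such as the midpoint is simple arithmetic. A minimal sketch: `element_center` is a hypothetical helper, and the dictionary keys (`x`, `y`, `width`, `height`) match what `rect` returns.

```python
def element_center(rect):
    """Compute the midpoint of an element from its `rect` dictionary,
    which carries 'x', 'y', 'width' and 'height' keys; useful, for
    example, as an offset target when building action chains."""
    return (rect["x"] + rect["width"] / 2.0,
            rect["y"] + rect["height"] / 2.0)

# With a live element:
# x, y = element_center(element.rect)
```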

  • screenshot_as_base64

    Gets the screenshot of the current element as a base64 encoded string.

    Usage:
    img_b64 = element.screenshot_as_base64

  • screenshot_as_png

    Gets the screenshot of the current element as binary data.

    Usage:
    element_png = element.screenshot_as_png

  • size

    The size of the element.

  • tag_name

    This element’s tagName property.

  • text

    The text of the element.

Remote WebDriver Command

class selenium.webdriver.remote.command.Command

Bases: object

Defines constants for the standard WebDriver commands.

While these constants have no meaning in and of themselves, they are used to marshal commands through a service that implements WebDriver’s remote wire protocol:

https://github.com/SeleniumHQ/selenium/wiki/JsonWireProtocol

  • ACCEPT_ALERT = 'acceptAlert'

  • ADD_COOKIE = 'addCookie'

  • CLEAR_APP_CACHE = 'clearAppCache'

  • CLEAR_ELEMENT = 'clearElement'

  • CLEAR_LOCAL_STORAGE = 'clearLocalStorage'

  • CLEAR_SESSION_STORAGE = 'clearSessionStorage'

  • CLICK = 'mouseClick'

  • CLICK_ELEMENT = 'clickElement'

  • CLOSE = 'close'

  • CONTEXT_HANDLES = 'getContextHandles'

  • CURRENT_CONTEXT_HANDLE = 'getCurrentContextHandle'

  • DELETE_ALL_COOKIES = 'deleteAllCookies'

  • DELETE_COOKIE = 'deleteCookie'

  • DELETE_SESSION = 'deleteSession'

  • DISMISS_ALERT = 'dismissAlert'

  • DOUBLE_CLICK = 'mouseDoubleClick'

  • DOUBLE_TAP = 'touchDoubleTap'

  • ELEMENT_SCREENSHOT = 'elementScreenshot'

  • EXECUTE_ASYNC_SCRIPT = 'executeAsyncScript'

  • EXECUTE_SCRIPT = 'executeScript'

  • EXECUTE_SQL = 'executeSql'

  • FIND_CHILD_ELEMENT = 'findChildElement'

  • FIND_CHILD_ELEMENTS = 'findChildElements'

  • FIND_ELEMENT = 'findElement'

  • FIND_ELEMENTS = 'findElements'

  • FLICK = 'touchFlick'

  • FULLSCREEN_WINDOW = 'fullscreenWindow'

  • GET = 'get'

  • GET_ACTIVE_ELEMENT = 'getActiveElement'

  • GET_ALERT_TEXT = 'getAlertText'

  • GET_ALL_COOKIES = 'getCookies'

  • GET_ALL_SESSIONS = 'getAllSessions'

  • GET_APP_CACHE = 'getAppCache'

  • GET_APP_CACHE_STATUS = 'getAppCacheStatus'

  • GET_AVAILABLE_LOG_TYPES = 'getAvailableLogTypes'

  • GET_COOKIE = 'getCookie'

  • GET_CURRENT_URL = 'getCurrentUrl'

  • GET_CURRENT_WINDOW_HANDLE = 'getCurrentWindowHandle'

  • GET_ELEMENT_ATTRIBUTE = 'getElementAttribute'

  • GET_ELEMENT_LOCATION = 'getElementLocation'

  • GET_ELEMENT_LOCATION_ONCE_SCROLLED_INTO_VIEW = 'getElementLocationOnceScrolledIntoView'

  • GET_ELEMENT_PROPERTY = 'getElementProperty'

  • GET_ELEMENT_RECT = 'getElementRect'

  • GET_ELEMENT_SIZE = 'getElementSize'

  • GET_ELEMENT_TAG_NAME = 'getElementTagName'

  • GET_ELEMENT_TEXT = 'getElementText'

  • GET_ELEMENT_VALUE = 'getElementValue'

  • GET_ELEMENT_VALUE_OF_CSS_PROPERTY = 'getElementValueOfCssProperty'

  • GET_LOCAL_STORAGE_ITEM = 'getLocalStorageItem'

  • GET_LOCAL_STORAGE_KEYS = 'getLocalStorageKeys'

  • GET_LOCAL_STORAGE_SIZE = 'getLocalStorageSize'

  • GET_LOCATION = 'getLocation'

  • GET_LOG = 'getLog'

  • GET_NETWORK_CONNECTION = 'getNetworkConnection'

  • GET_PAGE_SOURCE = 'getPageSource'

  • GET_SCREEN_ORIENTATION = 'getScreenOrientation'

  • GET_SESSION_STORAGE_ITEM = 'getSessionStorageItem'

  • GET_SESSION_STORAGE_KEYS = 'getSessionStorageKeys'

  • GET_SESSION_STORAGE_SIZE = 'getSessionStorageSize'

  • GET_TITLE = 'getTitle'

  • GET_WINDOW_HANDLES = 'getWindowHandles'

  • GET_WINDOW_POSITION = 'getWindowPosition'

  • GET_WINDOW_RECT = 'getWindowRect'

  • GET_WINDOW_SIZE = 'getWindowSize'

  • GO_BACK = 'goBack'

  • GO_FORWARD = 'goForward'

  • IMPLICIT_WAIT = 'implicitlyWait'

  • IS_ELEMENT_DISPLAYED = 'isElementDisplayed'

  • IS_ELEMENT_ENABLED = 'isElementEnabled'

  • IS_ELEMENT_SELECTED = 'isElementSelected'

  • LONG_PRESS = 'touchLongPress'

  • MAXIMIZE_WINDOW = 'windowMaximize'

  • MINIMIZE_WINDOW = 'minimizeWindow'

  • MOUSE_DOWN = 'mouseButtonDown'

  • MOUSE_UP = 'mouseButtonUp'

  • MOVE_TO = 'mouseMoveTo'

  • NEW_SESSION = 'newSession'

  • QUIT = 'quit'

  • REFRESH = 'refresh'

  • REMOVE_LOCAL_STORAGE_ITEM = 'removeLocalStorageItem'

  • REMOVE_SESSION_STORAGE_ITEM = 'removeSessionStorageItem'

  • SCREENSHOT = 'screenshot'

  • SEND_KEYS_TO_ACTIVE_ELEMENT = 'sendKeysToActiveElement'

  • SEND_KEYS_TO_ELEMENT = 'sendKeysToElement'

  • SET_ALERT_CREDENTIALS = 'setAlertCredentials'

  • SET_ALERT_VALUE = 'setAlertValue'

  • SET_ELEMENT_SELECTED = 'setElementSelected'

  • SET_LOCAL_STORAGE_ITEM = 'setLocalStorageItem'

  • SET_LOCATION = 'setLocation'

  • SET_NETWORK_CONNECTION = 'setNetworkConnection'

  • SET_SCREEN_ORIENTATION = 'setScreenOrientation'

  • SET_SCRIPT_TIMEOUT = 'setScriptTimeout'

  • SET_SESSION_STORAGE_ITEM = 'setSessionStorageItem'

  • SET_TIMEOUTS = 'setTimeouts'

  • SET_WINDOW_POSITION = 'setWindowPosition'

  • SET_WINDOW_RECT = 'setWindowRect'

  • SET_WINDOW_SIZE = 'setWindowSize'

  • SINGLE_TAP = 'touchSingleTap'

  • STATUS = 'status'

  • SUBMIT_ELEMENT = 'submitElement'

  • SWITCH_TO_CONTEXT = 'switchToContext'

  • SWITCH_TO_FRAME = 'switchToFrame'

  • SWITCH_TO_PARENT_FRAME = 'switchToParentFrame'

  • SWITCH_TO_WINDOW = 'switchToWindow'

  • TOUCH_DOWN = 'touchDown'

  • TOUCH_MOVE = 'touchMove'

  • TOUCH_SCROLL = 'touchScroll'

  • TOUCH_UP = 'touchUp'

  • UPLOAD_FILE = 'uploadFile'

  • W3C_ACCEPT_ALERT = 'w3cAcceptAlert'

  • W3C_ACTIONS = 'actions'

  • W3C_CLEAR_ACTIONS = 'clearActionState'

  • W3C_DISMISS_ALERT = 'w3cDismissAlert'

  • W3C_EXECUTE_SCRIPT = 'w3cExecuteScript'

  • W3C_EXECUTE_SCRIPT_ASYNC = 'w3cExecuteScriptAsync'

  • W3C_GET_ACTIVE_ELEMENT = 'w3cGetActiveElement'

  • W3C_GET_ALERT_TEXT = 'w3cGetAlertText'

  • W3C_GET_CURRENT_WINDOW_HANDLE = 'w3cGetCurrentWindowHandle'

  • W3C_GET_WINDOW_HANDLES = 'w3cGetWindowHandles'

  • W3C_GET_WINDOW_POSITION = 'w3cGetWindowPosition'

  • W3C_GET_WINDOW_SIZE = 'w3cGetWindowSize'

  • W3C_MAXIMIZE_WINDOW = 'w3cMaximizeWindow'

  • W3C_SET_ALERT_VALUE = 'w3cSetAlertValue'

  • W3C_SET_WINDOW_POSITION = 'w3cSetWindowPosition'

  • W3C_SET_WINDOW_SIZE = 'w3cSetWindowSize'

Remote WebDriver Error Handler

class selenium.webdriver.remote.errorhandler.ErrorCode

Bases: object

Error codes defined in the WebDriver wire protocol.

  • ELEMENT_CLICK_INTERCEPTED = [64, 'element click intercepted']

  • ELEMENT_IS_NOT_SELECTABLE = [15, 'element not selectable']

  • ELEMENT_NOT_INTERACTABLE = [60, 'element not interactable']

  • ELEMENT_NOT_VISIBLE = [11, 'element not visible']

  • IME_ENGINE_ACTIVATION_FAILED = [31, 'ime engine activation failed']

  • IME_NOT_AVAILABLE = [30, 'ime not available']

  • INSECURE_CERTIFICATE = ['insecure certificate']

  • INVALID_ARGUMENT = [61, 'invalid argument']

  • INVALID_COOKIE_DOMAIN = [24, 'invalid cookie domain']

  • INVALID_COORDINATES = ['invalid coordinates']

  • INVALID_ELEMENT_COORDINATES = [29, 'invalid element coordinates']

  • INVALID_ELEMENT_STATE = [12, 'invalid element state']

  • INVALID_SELECTOR = [32, 'invalid selector']

  • INVALID_SESSION_ID = ['invalid session id']

  • INVALID_XPATH_SELECTOR = [51, 'invalid selector']

  • INVALID_XPATH_SELECTOR_RETURN_TYPER = [52, 'invalid selector']

  • JAVASCRIPT_ERROR = [17, 'javascript error']

  • METHOD_NOT_ALLOWED = [405, 'unsupported operation']

  • MOVE_TARGET_OUT_OF_BOUNDS = [34, 'move target out of bounds']

  • NO_ALERT_OPEN = [27, 'no such alert']

  • NO_SUCH_COOKIE = [62, 'no such cookie']

  • NO_SUCH_ELEMENT = [7, 'no such element']

  • NO_SUCH_FRAME = [8, 'no such frame']

  • NO_SUCH_WINDOW = [23, 'no such window']

  • SCRIPT_TIMEOUT = [28, 'script timeout']

  • SESSION_NOT_CREATED = [33, 'session not created']

  • STALE_ELEMENT_REFERENCE = [10, 'stale element reference']

  • SUCCESS = 0

  • TIMEOUT = [21, 'timeout']

  • UNABLE_TO_CAPTURE_SCREEN = [63, 'unable to capture screen']

  • UNABLE_TO_SET_COOKIE = [25, 'unable to set cookie']

  • UNEXPECTED_ALERT_OPEN = [26, 'unexpected alert open']

  • UNKNOWN_COMMAND = [9, 'unknown command']

  • UNKNOWN_ERROR = [13, 'unknown error']

  • UNKNOWN_METHOD = ['unknown method exception']

  • XPATH_LOOKUP_ERROR = [19, 'invalid selector']

class selenium.webdriver.remote.errorhandler.ErrorHandler

Bases: object

Handles errors returned by the WebDriver server.

check_response(response)

Checks that a JSON response from the WebDriver does not have an error.

Args: response - The JSON response from the WebDriver server as a dictionary object.

Raises: WebDriverException - if the response contains an error message.

Remote WebDriver Mobile

class selenium.webdriver.remote.mobile.Mobile(driver)

Bases: object

  • class ConnectionType(mask)

    Bases: object

    • __init__(mask)

      Initialize self. See help(type(self)) for accurate signature.

    • airplane_mode

    • data

    • wifi

  • __init__(driver)

    Initialize self. See help(type(self)) for accurate signature.

  • set_network_connection(network)

    Set the network connection for the remote device. Example of setting airplane mode: driver.mobile.set_network_connection(driver.mobile.AIRPLANE_MODE)

  • AIRPLANE_MODE = <selenium.webdriver.remote.mobile.Mobile.ConnectionType object>

  • ALL_NETWORK = <selenium.webdriver.remote.mobile.Mobile.ConnectionType object>

  • DATA_NETWORK = <selenium.webdriver.remote.mobile.Mobile.ConnectionType object>

  • WIFI_NETWORK = <selenium.webdriver.remote.mobile.Mobile.ConnectionType object>

  • context

    returns the current context (Native or WebView).

  • contexts

    returns a list of available contexts

  • network_connection

Remote WebDriver Remote Connection

class selenium.webdriver.remote.remote_connection.RemoteConnection(remote_server_addr, keep_alive=False, resolve_ip=True)

Bases: object

A connection with the Remote WebDriver server.

Communicates with the server using the WebDriver wire protocol: https://github.com/SeleniumHQ/selenium/wiki/JsonWireProtocol

  • __init__(remote_server_addr, keep_alive=False, resolve_ip=True)

    Initialize self. See help(type(self)) for accurate signature.

  • execute(command, params)

    Send a command to the remote server. Any path substitutions required for the URL mapped to the command should be included in the command parameters.

    Args: command - A string specifying the command to execute. params - A dictionary of named parameters to send with the command as its JSON payload.

  • classmethod get_remote_connection_headers(parsed_url, keep_alive=False)

    Get headers for remote request.

    Args: parsed_url - The parsed url. keep_alive (Boolean) - Is this a keep-alive connection (default: False)

  • classmethod get_timeout()

    Returns:Timeout value in seconds for all http requests made to the Remote Connection

  • classmethod reset_timeout()

    Reset the http request timeout to socket._GLOBAL_DEFAULT_TIMEOUT

  • classmethod set_timeout(timeout)

    Override the default timeout

    Args:timeout - timeout value for http requests in seconds

Remote WebDriver Utils

  • selenium.webdriver.remote.utils.dump_json(json_struct)

  • selenium.webdriver.remote.utils.format_json(json_struct)

  • selenium.webdriver.remote.utils.load_json(s)

  • selenium.webdriver.remote.utils.unzip_to_temp_dir(zip_file_name)

    Unzip zipfile to a temporary directory. The directory of the unzipped files is returned on success, otherwise None is returned.

Internet Explorer WebDriver

class selenium.webdriver.ie.webdriver.WebDriver(executable_path='IEDriverServer.exe', capabilities=None, port=0, timeout=30, host=None, log_level=None, service_log_path=None, options=None, ie_options=None, desired_capabilities=None, log_file=None, keep_alive=False)

Bases: selenium.webdriver.remote.webdriver.WebDriver

Controls the IEServerDriver and allows you to drive Internet Explorer

  • __init__(executable_path='IEDriverServer.exe', capabilities=None, port=0, timeout=30, host=None, log_level=None, service_log_path=None, options=None, ie_options=None, desired_capabilities=None, log_file=None, keep_alive=False)

    Creates a new instance of the IE driver.

    Starts the service and then creates a new instance of the IE driver.

    Args:executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH

    capabilities: capabilities Dictionary object

    port - port you would like the service to run, if left as 0, a free port will be found.

    timeout - no longer used, kept for backward compatibility

    host - IP address for the service

    log_level - log level you would like the service to run.

    service_log_path - target of logging of service, may be “stdout”, “stderr” or file path.

    options - IE Options instance, providing additional IE options

    ie_options - Deprecated argument for options

    desired_capabilities - alias of capabilities; this will make the signature consistent with RemoteWebDriver.

    log_file - Deprecated argument for service_log_path

    keep_alive - Whether to configure RemoteConnection to use HTTP keep-alive.

  • create_options()

  • quit()

    Quits the driver and closes every associated window. Usage: driver.quit()

Android WebDriver

class selenium.webdriver.android.webdriver.WebDriver(host='localhost', port=4444, desired_capabilities={'browserName': 'android', 'platform': 'ANDROID', 'version': ''})

Bases: selenium.webdriver.remote.webdriver.WebDriver

Simple RemoteWebDriver wrapper to connect to Selendroid's WebView app

For more info on getting started with Selendroid http://selendroid.io/mobileWeb.html

__init__(host='localhost', port=4444, desired_capabilities={'browserName': 'android', 'platform': 'ANDROID', 'version': ''})

Creates a new instance of Selendroid using the WebView app

Args: host - location of where selendroid is running

port - port that selendroid is running on

desired_capabilities: Dictionary object with capabilities

Opera WebDriver

class selenium.webdriver.opera.webdriver.OperaDriver(executable_path=None, port=0, options=None, service_args=None, desired_capabilities=None, service_log_path=None, opera_options=None, keep_alive=True)

Bases: selenium.webdriver.chrome.webdriver.WebDriver

Controls the new OperaDriver and allows you to drive the Opera browser based on Chromium.

  • __init__(executable_path=None, port=0, options=None, service_args=None, desired_capabilities=None, service_log_path=None, opera_options=None, keep_alive=True)

    Creates a new instance of the operadriver.

    Starts the service and then creates new instance of operadriver.

    Args: executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH

    port - port you would like the service to run; if left as 0, a free port will be found.

    options: this takes an instance of OperaOptions

    service_args - List of args to pass to the driver service

    desired_capabilities: Dictionary object with non-browser specific capabilities only, such as "proxy" or "loggingPref".

    service_log_path - Where to log information from the driver.

    opera_options - Deprecated argument for options

  • create_options()

class selenium.webdriver.opera.webdriver.WebDriver(desired_capabilities=None, executable_path=None, port=0, service_log_path=None, service_args=None, options=None)

Bases: selenium.webdriver.opera.webdriver.OperaDriver

  • class ServiceType

    Bases: object

    CHROMIUM = 2

    • executable_path - path to the executable. If the default is used, it assumes the executable is in the $PATH

    • port - port you would like the service to run; if left as 0, a free port will be found.

    • options: this takes an instance of OperaOptions

    • service_args - List of args to pass to the driver service

    • desired_capabilities: Dictionary object with non-browser specific capabilities only, such as "proxy" or "loggingPref".

    • service_log_path - Where to log information from the driver.

    • opera_options - Deprecated argument for options

PhantomJS WebDriver

class selenium.webdriver.phantomjs.webdriver.WebDriver(executable_path='phantomjs', port=0, desired_capabilities={'browserName': 'phantomjs', 'javascriptEnabled': True, 'platform': 'ANY', 'version': ''}, service_args=None, service_log_path=None)

Bases: selenium.webdriver.remote.webdriver.WebDriver

Wrapper to communicate with PhantomJS through Ghostdriver.

You will need to follow all the directions here: https://github.com/detro/ghostdriver

  • __init__(executable_path='phantomjs', port=0, desired_capabilities={'browserName': 'phantomjs', 'javascriptEnabled': True, 'platform': 'ANY', 'version': ''}, service_args=None, service_log_path=None)

    Creates a new instance of the PhantomJS / Ghostdriver.

    Starts the service and then creates new instance of the driver.

    Args:executable_path - path to the executable. If the default is used it assumes the executable is in the $PATH

    port - port you would like the service to run, if left as 0, a free port will be found.

    desired_capabilities: Dictionary object with non-browser specific capabilities only, such as “proxy” or “loggingPref”.

    service_args : A List of command line arguments to pass to PhantomJS

    service_log_path : Path for the phantomjs service to log to.

  • quit()

    Closes the browser and shuts down the PhantomJS executable that is started when starting the PhantomJS

PhantomJS WebDriver Service

class selenium.webdriver.phantomjs.service.Service(executable_path, port=0, service_args=None, log_path=None)

Bases: selenium.webdriver.common.service.Service

Object that manages the starting and stopping of PhantomJS / Ghostdriver

  • __init__(executable_path, port=0, service_args=None, log_path=None)

    Creates a new instance of the Service

    Args:executable_path : Path to PhantomJS binary

    port : Port the service is running on

    service_args : A List of other command line options to pass to PhantomJS

    log_path: Path for PhantomJS service to log to

  • command_line_args()

  • send_remote_shutdown_command()

  • service_url

    Gets the url of the GhostDriver Service

Safari WebDriver

class selenium.webdriver.safari.webdriver.WebDriver(port=0, executable_path='/usr/bin/safaridriver', reuse_service=False, desired_capabilities={'browserName': 'safari', 'platform': 'MAC', 'version': ''}, quiet=False, keep_alive=True, service_args=None)

Bases: selenium.webdriver.remote.webdriver.WebDriver

Controls the SafariDriver and allows you to drive the browser.

  • __init__(port=0, executable_path='/usr/bin/safaridriver', reuse_service=False, desired_capabilities={'browserName': 'safari', 'platform': 'MAC', 'version': ''}, quiet=False, keep_alive=True, service_args=None)

    Creates a new Safari driver instance and launches or finds a running safaridriver service.

    Args:port - The port on which the safaridriver service should listen for new connections. If zero, a free port will be found.

    executable_path - Path to a custom safaridriver executable to be used. If absent, /usr/bin/safaridriver is used.

    reuse_service - If True, do not spawn a safaridriver instance; instead, connect to an already-running service that was launched externally.

    desired_capabilities: Dictionary object with desired capabilities (Can be used to provide various Safari switches).

    quiet - If True, the driver’s stdout and stderr is suppressed.

    keep_alive - Whether to configure SafariRemoteConnection to use HTTP keep-alive. Defaults to True.

    service_args : List of args to pass to the safaridriver service

  • debug()

  • get_permission(permission)

  • quit()

    Closes the browser and shuts down the SafariDriver executable that is started when starting the SafariDriver

  • set_permission(permission, value)

Safari WebDriver Service

class selenium.webdriver.safari.service.Service(executable_path, port=0, quiet=False, service_args=None)

Bases: selenium.webdriver.common.service.Service

Object that manages the starting and stopping of the SafariDriver

  • __init__(executable_path, port=0, quiet=False, service_args=None)

    Creates a new instance of the Service

    Args:executable_path : Path to the SafariDriver

    port : Port the service is running on

    quiet : Suppress driver stdout and stderr

    service_args : List of args to pass to the safaridriver service

  • command_line_args()

  • service_url

    Gets the url of the SafariDriver Service

Select Support

  • class selenium.webdriver.support.select.Select(webelement)

    Bases: object

    • __init__(webelement)

      Constructor. A check is made that the given element is, indeed, a SELECT tag. If it is not, then an UnexpectedTagNameException is thrown.

      Args:webelement - element SELECT element to wrap

      Example:
      from selenium.webdriver.support.ui import Select
      Select(driver.find_element_by_tag_name("select")).select_by_index(2)

    • deselect_all()

      Clear all selected entries. This is only valid when the SELECT supports multiple selections. throws NotImplementedError If the SELECT does not support multiple selections

    • deselect_by_index(index)

      Deselect the option at the given index. This is done by examining the "index" attribute of an element, and not merely by counting.

      Args: index - The option at this index will be deselected. Throws NoSuchElementException if there is no option with the specified index in the SELECT.

    • deselect_by_value(value)

      Deselect all options that have a value matching the argument. That is, when given "foo" this would deselect an option like: <option value="foo">Bar</option>

      Args: value - The value to match against. Throws NoSuchElementException if there is no option with the specified value in the SELECT.

    • deselect_by_visible_text(text)

      Deselect all options that display text matching the argument. That is, when given "Bar" this would deselect an option like: <option value="foo">Bar</option>

      Args: text - The visible text to match against.

    • select_by_index(index)

      Select the option at the given index. This is done by examining the "index" attribute of an element, and not merely by counting.

      Args: index - The option at this index will be selected. Throws NoSuchElementException if there is no option with the specified index in the SELECT.

    • select_by_value(value)

      Select all options that have a value matching the argument. That is, when given "foo" this would select an option like: <option value="foo">Bar</option>

      Args: value - The value to match against. Throws NoSuchElementException if there is no option with the specified value in the SELECT.

    • select_by_visible_text(text)

      Select all options that display text matching the argument. That is, when given "Bar" this would select an option like: <option value="foo">Bar</option>

      Args: text - The visible text to match against. Throws NoSuchElementException if there is no option with the specified text in the SELECT.

    • all_selected_options

      Returns a list of all selected options belonging to this select tag

    • first_selected_option

      The first selected option in this select tag (or the currently selected option in a normal select)

    • options

      Returns a list of all options belonging to this select tag

Wait Support

class selenium.webdriver.support.wait.WebDriverWait(driver, timeout, poll_frequency=0.5, ignored_exceptions=None)

Bases: object

  • __init__(driver, timeout, poll_frequency=0.5, ignored_exceptions=None)

    Constructor, takes a WebDriver instance and timeout in seconds.

    Args:driver - Instance of WebDriver (Ie, Firefox, Chrome or Remote)

    timeout - Number of seconds before timing out

    poll_frequency - sleep interval between calls. By default, it is 0.5 seconds.

    ignored_exceptions - iterable structure of exception classes ignored during calls. By default, it contains NoSuchElementException only.

    Example:
    from selenium.webdriver.support.ui import WebDriverWait
    element = WebDriverWait(driver, 10).until(lambda x: x.find_element_by_id("someId"))
    is_disappeared = WebDriverWait(driver, 30, 1, (ElementNotVisibleException)).until_not(lambda x: x.find_element_by_id("someId").is_displayed())

  • until(method, message='')

    Calls the method provided with the driver as an argument until the return value is not False.

  • until_not(method, message='')

    Calls the method provided with the driver as an argument until the return value is False.

Color Support

  • class selenium.webdriver.support.color.Color(red, green, blue, alpha=1)

    Bases: object

    Color conversion support class

    Example:

    from selenium.webdriver.support.color import Color
    
    print(Color.from_string('#00ff33').rgba)
    print(Color.from_string('rgb(1, 255, 3)').hex)
    print(Color.from_string('blue').rgba)
    • __init__(red, green, blue, alpha=1)

      Initialize self. See help(type(self)) for accurate signature.

    • static from_string(str_)

    • hex

    • rgb

    • rgba

Event Firing WebDriver Support

class selenium.webdriver.support.event_firing_webdriver.EventFiringWebDriver(driver, event_listener)

Bases: object

A wrapper around an arbitrary WebDriver instance which supports firing events

  • __init__(driver, event_listener)

    Creates a new instance of the EventFiringWebDriver

    Args: driver - A WebDriver instance. event_listener - Instance of a class that subclasses AbstractEventListener and implements it fully or partially

    Example:

    from selenium.webdriver import Firefox
    from selenium.webdriver.support.events import EventFiringWebDriver, AbstractEventListener
    
    class MyListener(AbstractEventListener):
        def before_navigate_to(self, url, driver):
            print("Before navigate to %s" % url)
        def after_navigate_to(self, url, driver):
            print("After navigate to %s" % url)
    
    driver = Firefox()
    ef_driver = EventFiringWebDriver(driver, MyListener())
    ef_driver.get("http://www.google.co.in/")
  • back()

  • close()

  • execute_async_script(script, *args)

  • execute_script(script, *args)

  • find_element(by='id', value=None)

  • find_element_by_class_name(name)

  • find_element_by_css_selector(css_selector)

  • find_element_by_id(id_)

  • find_element_by_link_text(link_text)

  • find_element_by_name(name)

  • find_element_by_partial_link_text(link_text)

  • find_element_by_tag_name(name)

  • find_element_by_xpath(xpath)

  • find_elements(by='id', value=None)

  • find_elements_by_class_name(name)

  • find_elements_by_css_selector(css_selector)

  • find_elements_by_id(id_)

  • find_elements_by_link_text(text)

  • find_elements_by_name(name)

  • find_elements_by_partial_link_text(link_text)

  • find_elements_by_tag_name(name)

  • find_elements_by_xpath(xpath)

  • forward()

  • get(url)

  • quit()

  • wrapped_driver

    Returns the WebDriver instance wrapped by this EventFiringWebDriver

class selenium.webdriver.support.event_firing_webdriver.EventFiringWebElement(webelement, ef_driver)

Bases: object

A wrapper around a WebElement instance which supports firing events

  • __init__(webelement, ef_driver)

    Creates a new instance of the EventFiringWebElement

  • clear()

  • click()

  • find_element(by='id', value=None)

  • find_element_by_class_name(name)

  • find_element_by_css_selector(css_selector)

  • find_element_by_id(id_)

  • find_element_by_link_text(link_text)

  • find_element_by_name(name)

  • find_element_by_partial_link_text(link_text)

  • find_element_by_tag_name(name)

  • find_element_by_xpath(xpath)

  • find_elements(by='id', value=None)

  • find_elements_by_class_name(name)

  • find_elements_by_css_selector(css_selector)

  • find_elements_by_id(id_)

  • find_elements_by_link_text(link_text)

  • find_elements_by_name(name)

  • find_elements_by_partial_link_text(link_text)

  • find_elements_by_tag_name(name)

  • find_elements_by_xpath(xpath)

  • send_keys(*value)

  • wrapped_element

    Returns the WebElement wrapped by this EventFiringWebElement instance

Abstract Event Listener Support

class selenium.webdriver.support.abstract_event_listener.AbstractEventListener

Bases: object

Event listener must subclass and implement this fully or partially

  • after_change_value_of(element, driver)

  • after_click(element, driver)

  • after_close(driver)

  • after_execute_script(script, driver)

  • after_find(by, value, driver)

  • after_navigate_back(driver)

  • after_navigate_forward(driver)

  • after_navigate_to(url, driver)

  • after_quit(driver)

  • before_change_value_of(element, driver)

  • before_click(element, driver)

  • before_close(driver)

  • before_execute_script(script, driver)

  • before_find(by, value, driver)

  • before_navigate_back(driver)

  • before_navigate_forward(driver)

  • before_navigate_to(url, driver)

  • before_quit(driver)

  • on_exception(exception, driver)

Expected conditions Support

  • class selenium.webdriver.support.expected_conditions.alert_is_present

    Bases: object

    Expect an alert to be present.

    __init__()

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.element_located_selection_state_to_be(locator, is_selected)

    Bases: object

    An expectation to locate an element and check that its selection state matches the specified state. locator is a tuple of (by, path); is_selected is a boolean

    __init__(locator, is_selected)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.element_located_to_be_selected(locator)

    Bases: object

    An expectation that the located element is selected. locator is a tuple of (by, path)

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.element_selection_state_to_be(element, is_selected)

    Bases: object

    An expectation for checking if the given element is selected. element is a WebElement object; is_selected is a Boolean.

    __init__(element, is_selected)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.element_to_be_clickable(locator)

    Bases: object

    An Expectation for checking an element is visible and enabled such that you can click it.

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.element_to_be_selected(element)

    Bases: object

    An expectation for checking the selection is selected. element is WebElement object

    __init__(element)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.frame_to_be_available_and_switch_to_it(locator)

    Bases: object

    An expectation for checking whether the given frame is available to switch to. If the frame is available it switches the given driver to the specified frame.

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.invisibility_of_element(locator)

    Bases: selenium.webdriver.support.expected_conditions.invisibility_of_element_located

    An Expectation for checking that an element is either invisible or not present on the DOM. element is either a locator (text) or a WebElement

  • class selenium.webdriver.support.expected_conditions.invisibility_of_element_located(locator)

    Bases: object

    An Expectation for checking that an element is either invisible or not present on the DOM. locator used to find the element

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.new_window_is_opened(current_handles)

    Bases: object

    An expectation that a new window will be opened and have the number of windows handles increase

    __init__(current_handles)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.number_of_windows_to_be(num_windows)

    Bases: object

    An expectation for the number of windows to be a certain value.

    __init__(num_windows)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.presence_of_all_elements_located(locator)

    Bases: object

    An expectation for checking that there is at least one element present on a web page. locator is used to find the element returns the list of WebElements once they are located

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.presence_of_element_located(locator)

    Bases: object

    An expectation for checking that an element is present on the DOM of a page. This does not necessarily mean that the element is visible. locator - used to find the element returns the WebElement once it is located

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.staleness_of(element)

    Bases: object

    Wait until an element is no longer attached to the DOM. element is the element to wait for. returns False if the element is still attached to the DOM, true otherwise.

    __init__(element)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.text_to_be_present_in_element(locator, text_)

    Bases: object

    An expectation for checking if the given text is present in the specified element. locator, text

    __init__(locator, text_)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.text_to_be_present_in_element_value(locator, text_)

    Bases: object

    An expectation for checking if the given text is present in the element's value attribute. locator, text

    __init__(locator, text_)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.title_contains(title)

    Bases: object

    An expectation for checking that the title contains a case-sensitive substring. title is the fragment of title expected returns True when the title matches, False otherwise

    __init__(title)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.title_is(title)

    Bases: object

    An expectation for checking the title of a page. title is the expected title, which must be an exact match returns True if the title matches, false otherwise.

    __init__(title)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.url_changes(url)

    Bases: object

    An expectation for checking the current url. url is the expected url, which must not be an exact match returns True if the url is different, false otherwise.

    __init__(url)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.url_contains(url)

    Bases: object

    An expectation for checking that the current url contains a case-sensitive substring. url is the fragment of url expected, returns True when the url matches, False otherwise

    __init__(url)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.url_matches(pattern)

    Bases: object

    An expectation for checking the current url. pattern is the expected pattern, which is searched for in the url; returns True if the url matches, False otherwise.

    __init__(pattern)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.url_to_be(url)

    Bases: object

    An expectation for checking the current url. url is the expected url, which must be an exact match returns True if the url matches, false otherwise.

    __init__(url)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.visibility_of(element)

    Bases: object

    An expectation for checking that an element, known to be present on the DOM of a page, is visible. Visibility means that the element is not only displayed but also has a height and width that is greater than 0. element is the WebElement returns the (same) WebElement once it is visible

    __init__(element)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.visibility_of_all_elements_located(locator)

    Bases: object

    An expectation for checking that all elements are present on the DOM of a page and visible. Visibility means that the elements are not only displayed but also has a height and width that is greater than 0. locator - used to find the elements returns the list of WebElements once they are located and visible

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.visibility_of_any_elements_located(locator)

    Bases: object

    An expectation for checking that there is at least one element visible on a web page. locator is used to find the element returns the list of WebElements once they are located

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

  • class selenium.webdriver.support.expected_conditions.visibility_of_element_located(locator)

    Bases: object

    An expectation for checking that an element is present on the DOM of a page and visible. Visibility means that the element is not only displayed but also has a height and width that is greater than 0. locator - used to find the element returns the WebElement once it is located and visible

    __init__(locator)

    Initialize self. See help(type(self)) for accurate signature.

Appendix: Frequently Asked Questions

How to use ChromeDriver?

Download the latest release of chromedriver and unzip it:

unzip chromedriver_linux32_x.x.x.x.zip

You should see a chromedriver executable. You can now create a Chrome WebDriver instance like this:

driver = webdriver.Chrome(executable_path="/path/to/chromedriver")

The rest of the example should work as given in other documentation.

Does Selenium 2 support XPath 2.0?

Ref: http://seleniumhq.org/docs/03_webdriver.html#how-xpath-works-in-webdriver

Selenium delegates XPath queries to the browser's own XPath engine, so Selenium supports XPath in any browser that supports it. In browsers without a native XPath engine (IE 6, 7, 8), Selenium supports XPath 1.0 only.

How to scroll down to the bottom of a page?

Ref: http://blog.varunin.com/2011/08/scrolling-on-pages-using-selenium.html

You can use the execute_script method to run JavaScript on a loaded page, so you can call the JavaScript API to scroll to the bottom or to any other position of the page.

Here is an example of scrolling to the bottom of the page:

driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

The window object in the DOM has a scrollTo method to scroll to any position of an opened window. scrollHeight is a common property of all elements; document.body.scrollHeight gives the height of the entire page body.

如何使用自定义的Firefox 配置文件保存文件?

参考: http://stackoverflow.com/questions/1176348/access-to-file-download-dialog-in-firefox

参考: http://blog.codecentric.de/en/2010/07/file-downloads-with-selenium-mission-impossible/

第一步是要确认自动保存文件的类型。

要确定你想要自动下载的内容类型,你可使用 curl:

curl -I URL | grep "Content-Type"

找到内容类型的另一种方法是使用 requests 模块(http://python-requests.org),你可以像这样使用:

import requests
content_type = requests.head('http://www.python.org').headers['content-type']
print(content_type)

一旦内容类型被确认,你可以用它来设置firefox配置文件的偏好: browser.helperApps.neverAsk.saveToDisk 下面是一个例子:

import os

from selenium import webdriver

fp = webdriver.FirefoxProfile()

fp.set_preference("browser.download.folderList",2)
fp.set_preference("browser.download.manager.showWhenStarting",False)
fp.set_preference("browser.download.dir", os.getcwd())
fp.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/octet-stream")

browser = webdriver.Firefox(firefox_profile=fp)
browser.get("http://pypi.python.org/pypi/selenium")
browser.find_element_by_partial_link_text("selenium-2").click()

在上面的例子中,application/octet-stream 被当作内容类型。

browser.download.dir 选项指定了你要下载文件的目录。

如何上传文件到文件上传控件?

选择 <input type="file"> 元素并且调用 send_keys() 方法传入要上传文件的路径,可以 是对于测试脚本的相对路径,也可以是绝对路径。 请牢记在Windows和Unix系统之间的路径名的区别。

如何在Firefox中使用Firebug工具?

首先下载Firebug插件的XPI文件,然后调用Firefox配置文件(FirefoxProfile)提供的 add_extension 方法:

from selenium import webdriver

fp = webdriver.FirefoxProfile()

fp.add_extension(extension='firebug-1.8.4.xpi')
fp.set_preference("extensions.firebug.currentVersion", "1.8.4")  # 避免显示启动画面
browser = webdriver.Firefox(firefox_profile=fp)

如何获取当前窗口的截图?

使用webdriver提供的 save_screenshot 方法:

from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://www.python.org/')
driver.save_screenshot('screenshot.png')
driver.quit()

爬虫进阶:Beautiful Soup 4.4.0

Beautiful Soup 是一个可以从HTML或XML文件中提取数据的Python库.它能够通过你喜欢的转换器实现惯用的文档导航,查找,修改文档的方式.Beautiful Soup会帮你节省数小时甚至数天的工作时间.

这篇文档介绍了BeautifulSoup4中所有主要特性,并且有小例子.让我来向你展示它适合做什么,如何工作,怎样使用,如何达到你想要的效果,和处理异常情况.

文档中出现的例子在Python2.7和Python3.2中的执行结果相同

你可能在寻找 Beautiful Soup3 的文档,Beautiful Soup 3 目前已经停止开发,我们推荐在现在的项目中使用Beautiful Soup 4, 移植到BS4

快速开始

下面的一段HTML代码将作为例子被多次用到.这是 爱丽丝梦游仙境的 的一段内容(以后内容中简称为 爱丽丝 的文档):

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

使用BeautifulSoup解析这段代码,能够得到一个 BeautifulSoup 的对象,并能按照标准的缩进格式的结构输出:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

print(soup.prettify())
# <html>
#  <head>
#   <title>
#    The Dormouse's story
#   </title>
#  </head>
#  <body>
#   <p class="title">
#    <b>
#     The Dormouse's story
#    </b>
#   </p>
#   <p class="story">
#    Once upon a time there were three little sisters; and their names were
#    <a class="sister" href="http://example.com/elsie" id="link1">
#     Elsie
#    </a>
#    ,
#    <a class="sister" href="http://example.com/lacie" id="link2">
#     Lacie
#    </a>
#    and
#    <a class="sister" href="http://example.com/tillie" id="link3">
#     Tillie
#    </a>
#    ; and they lived at the bottom of a well.
#   </p>
#   <p class="story">
#    ...
#   </p>
#  </body>
# </html>

几个简单的浏览结构化数据的方法:

soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

从文档中找到所有<a>标签的链接:

for link in soup.find_all('a'):
    print(link.get('href'))
    # http://example.com/elsie
    # http://example.com/lacie
    # http://example.com/tillie

从文档中获取所有文字内容:

print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...

安装 Beautiful Soup

如果你用的是新版的Debain或ubuntu,那么可以通过系统的软件包管理来安装:

$ apt-get install python-bs4

Beautiful Soup 4 通过PyPi发布,所以如果你无法使用系统包管理安装,那么也可以通过 easy_install 或 pip 来安装.包的名字是 beautifulsoup4 ,这个包兼容Python2和Python3.

$ easy_install beautifulsoup4
$ pip install beautifulsoup4

(在PyPi中还有一个名字是 BeautifulSoup 的包,但那可能不是你想要的,那是 Beautiful Soup3 的发布版本,因为很多项目还在使用BS3, 所以 BeautifulSoup 包依然有效.但是如果你在编写新项目,那么你应该安装的 beautifulsoup4 )

如果你没有安装 easy_install 或 pip ,那你也可以 下载BS4的源码 ,然后通过setup.py来安装.

$ python setup.py install

如果上述安装方法都行不通,Beautiful Soup的发布协议允许你将BS4的代码打包在你的项目中,这样无须安装即可使用.

在Python2.7和Python3.2的版本下开发Beautiful Soup, 理论上Beautiful Soup应该在所有当前的Python版本中正常工作
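安装完成后,可以先在解释器里确认一下实际安装的版本(下面的小例子只依赖 bs4 包本身):

```python
import bs4

# bs4 包在顶层暴露了 __version__ 属性,可用来确认安装的版本
print(bs4.__version__)
```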

安装完成后的问题

Beautiful Soup发布时打包成Python2版本的代码,在Python3环境下安装时,会自动转换成Python3的代码,如果没有一个安装的过程,那么代码就不会被转换.

如果代码抛出了 ImportError 的异常: “No module named HTMLParser”, 这是因为你在Python3版本中执行Python2版本的代码.

如果代码抛出了 ImportError 的异常: “No module named html.parser”, 这是因为你在Python2版本中执行Python3版本的代码.

如果遇到上述2种情况,最好的解决方法是重新安装BeautifulSoup4.

如果在 ROOT_TAG_NAME = u'[document]' 代码处遇到 SyntaxError “Invalid syntax”错误,需要把BS4的Python代码版本从Python2转换到Python3. 可以重新安装BS4:

$ python3 setup.py install

或在bs4的目录中执行Python代码版本转换脚本

$ 2to3-3.2 -w bs4

安装解析器

Beautiful Soup支持Python标准库中的HTML解析器,还支持一些第三方的解析器,其中一个是 lxml .根据操作系统不同,可以选择下列方法来安装lxml:

$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml

另一个可供选择的解析器是纯Python实现的 html5lib , html5lib的解析方式与浏览器相同,可以选择下列方法来安装html5lib:

$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib

下表列出了主要的解析器,以及它们的优缺点:

| 解析器 | 使用方法 | 优势 | 劣势 |
| --- | --- | --- | --- |
| Python标准库 | BeautifulSoup(markup, "html.parser") | Python的内置标准库;执行速度适中;文档容错能力强 | Python 2.7.3 和 3.2.2 之前的版本中文档容错能力差 |
| lxml HTML 解析器 | BeautifulSoup(markup, "lxml") | 速度快;文档容错能力强 | 需要安装C语言库 |
| lxml XML 解析器 | BeautifulSoup(markup, ["lxml-xml"]) 或 BeautifulSoup(markup, "xml") | 速度快;唯一支持XML的解析器 | 需要安装C语言库 |
| html5lib | BeautifulSoup(markup, "html5lib") | 最好的容错性;以浏览器的方式解析文档;生成HTML5格式的文档 | 速度慢;不依赖外部扩展 |

推荐使用lxml作为解析器,因为效率更高. 在Python 2.7.3之前的版本和Python 3.2.2之前的版本中,必须安装lxml或html5lib,因为那些Python版本的标准库中内置的HTML解析方法不够稳定.

提示: 如果一段HTML或XML文档格式不正确的话,那么在不同的解析器中返回的结果可能是不一样的,查看 解析器之间的区别 了解更多细节
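作为示意,下面用一段不完整的HTML观察解析结果;这里只实际运行标准库的 html.parser,lxml 与 html5lib 的对比留作注释(假设读者按需安装了相应解析器):

```python
from bs4 import BeautifulSoup

# 一段未闭合的 HTML,不同解析器会以不同方式"修复"它
broken = "<a><p>paragraph</a>"
print(BeautifulSoup(broken, "html.parser"))

# 若已安装相应解析器,可自行对比输出:
# print(BeautifulSoup(broken, "lxml"))
# print(BeautifulSoup(broken, "html5lib"))
```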

如何使用

将一段文档传入BeautifulSoup 的构造方法,就能得到一个文档的对象, 可以传入一段字符串或一个文件句柄.

from bs4 import BeautifulSoup

soup = BeautifulSoup(open("index.html"))

soup = BeautifulSoup("<html>data</html>")

首先,文档被转换成Unicode,并且HTML实体都被转换成对应的Unicode字符:

BeautifulSoup("Sacr&eacute; bleu!")
<html><head></head><body>Sacré bleu!</body></html>
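下面是一个显式指定 html.parser 的小示例,可以验证HTML实体确实被转换成了对应的Unicode字符:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("Sacr&eacute; bleu!", "html.parser")
# HTML 实体 &eacute; 已被转换成 Unicode 字符 é
print(soup.get_text())
# Sacré bleu!
```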

然后,Beautiful Soup选择最合适的解析器来解析这段文档,如果手动指定解析器那么Beautiful Soup会选择指定的解析器来解析文档.(参考 解析成XML ).

对象的种类

Beautiful Soup将复杂HTML文档转换成一个复杂的树形结构,每个节点都是Python对象,所有对象可以归纳为4种: Tag , NavigableString , BeautifulSoup , Comment .

Tag

Tag 对象与XML或HTML原生文档中的tag相同:

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>

Tag有很多方法和属性,在 遍历文档树 和 搜索文档树 中有详细解释.现在介绍一下tag中最重要的属性: name和attributes

Name

每个tag都有自己的名字,通过 .name 来获取:

tag.name
# u'b'

如果改变了tag的name,那将影响所有通过当前Beautiful Soup对象生成的HTML文档:

tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>

Attributes

一个tag可能有很多个属性. tag <b class="boldest"> 有一个 “class” 的属性,值为 “boldest” . tag的属性的操作方法与字典相同:

tag['class']
# u'boldest'

也可以直接”点”取属性, 比如: .attrs :

tag.attrs
# {u'class': u'boldest'}

tag的属性可以被添加,删除或修改. 再说一次, tag的属性操作方法与字典一样

tag['class'] = 'verybold'
tag['id'] = 1
tag
# <blockquote class="verybold" id="1">Extremely bold</blockquote>

del tag['class']
del tag['id']
tag
# <blockquote>Extremely bold</blockquote>

tag['class']
# KeyError: 'class'
print(tag.get('class'))
# None

多值属性

HTML 4定义了一系列可以包含多个值的属性.HTML5中移除了一些,却增加了更多.最常见的多值属性是 class (一个tag可以有多个CSS的class).还有一些属性如 rel , rev , accept-charset , headers , accesskey .在Beautiful Soup中多值属性的返回类型是list:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.p['class']
# ["body", "strikeout"]

css_soup = BeautifulSoup('<p class="body"></p>')
css_soup.p['class']
# ["body"]

如果某个属性看起来好像有多个值,但在任何版本的HTML定义中都没有被定义为多值属性,那么Beautiful Soup会将这个属性作为字符串返回

id_soup = BeautifulSoup('<p id="my id"></p>')
id_soup.p['id']
# 'my id'

将tag转换成字符串时,多值属性会合并为一个值

rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>

如果转换的文档是XML格式,那么tag中不包含多值属性

xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# u'body strikeout'

可以遍历的字符串

字符串常被包含在tag内.Beautiful Soup用 NavigableString 类来包装tag中的字符串:

tag.string
# u'Extremely bold'
type(tag.string)
# <class 'bs4.element.NavigableString'>

一个 NavigableString 字符串与Python中的Unicode字符串相同,并且还支持包含在 遍历文档树 和 搜索文档树 中的一些特性. 通过 unicode() 方法可以直接将 NavigableString 对象转换成Unicode字符串:

unicode_string = unicode(tag.string)
unicode_string
# u'Extremely bold'
type(unicode_string)
# <type 'unicode'>

tag中包含的字符串不能编辑,但是可以被替换成其它的字符串,用 replace_with() 方法:

tag.string.replace_with("No longer bold")
tag
# <blockquote>No longer bold</blockquote>

NavigableString 对象支持 遍历文档树 和 搜索文档树 中定义的大部分属性, 并非全部.尤其是,一个字符串不能包含其它内容(tag能够包含字符串或是其它tag),字符串不支持 .contents 或 .string 属性,也不支持 find() 方法.

如果想在Beautiful Soup之外使用 NavigableString 对象,需要调用 unicode() 方法,将该对象转换成普通的Unicode字符串.否则,就算已经不再使用Beautiful Soup,该字符串也仍会带有指向整个文档解析树的引用,这样会浪费内存.
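文档中的 unicode() 是 Python 2 的写法;在 Python 3 下没有 unicode(),对应做法是用内置的 str() 完成同样的转换(小示例):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
# Python 3 中用 str() 把 NavigableString 转成普通字符串
plain = str(soup.b.string)
print(type(plain))
# <class 'str'>
```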

BeautifulSoup

BeautifulSoup 对象表示的是一个文档的全部内容.大部分时候,可以把它当作 Tag 对象,它支持 遍历文档树 和 搜索文档树 中描述的大部分的方法.

因为 BeautifulSoup 对象并不是真正的HTML或XML的tag,所以它没有name和attribute属性.但有时查看它的 .name 属性是很方便的,所以 BeautifulSoup 对象包含了一个值为 “[document]” 的特殊属性 .name

soup.name
# u'[document]'

注释及特殊字符串

Tag , NavigableString , BeautifulSoup 几乎覆盖了html和xml中的所有内容,但是还有一些特殊对象.容易让人担心的内容是文档的注释部分:

markup = "<b><!--Hey, buddy. Want to buy a used parser?--></b>"
soup = BeautifulSoup(markup)
comment = soup.b.string
type(comment)
# <class 'bs4.element.Comment'>

Comment 对象是一个特殊类型的 NavigableString 对象:

comment
# u'Hey, buddy. Want to buy a used parser?'

但是当它出现在HTML文档中时, Comment 对象会使用特殊的格式输出:

print(soup.b.prettify())
# <b>
#  <!--Hey, buddy. Want to buy a used parser?-->
# </b>

Beautiful Soup中定义的其它类型都可能会出现在XML的文档中: CData , ProcessingInstruction , Declaration , Doctype .与 Comment 对象类似,这些类都是 NavigableString 的子类,只是添加了一些额外方法的字符串.下面是用CDATA来替代注释的例子:

from bs4 import CData
cdata = CData("A CDATA block")
comment.replace_with(cdata)

print(soup.b.prettify())
# <b>
#  <![CDATA[A CDATA block]]>
# </b>

遍历文档树

还拿”爱丽丝梦游仙境”的文档来做例子:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
    <body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

通过这段例子来演示怎样从文档的一段内容找到另一段内容

子节点

一个Tag可能包含多个字符串或其它的Tag,这些都是这个Tag的子节点.Beautiful Soup提供了许多操作和遍历子节点的属性.

注意: Beautiful Soup中字符串节点不支持这些属性,因为字符串没有子节点

tag的名字

操作文档树最简单的方法就是告诉它你想获取的tag的name.如果想获取 <head> 标签,只要用 soup.head :

soup.head
# <head><title>The Dormouse's story</title></head>

soup.title
# <title>The Dormouse's story</title>

这是个获取tag的小窍门,可以在文档树的tag中多次调用这个方法.下面的代码可以获取<body>标签中的第一个<b>标签:

soup.body.b
# <b>The Dormouse's story</b>

通过点取属性的方式只能获得当前名字的第一个tag:

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

如果想要得到所有的<a>标签,或是通过名字得到比一个tag更多的内容的时候,就需要用到 Searching the tree 中描述的方法,比如: find_all()

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

.contents 和 .children

tag的 .contents 属性可以将tag的子节点以列表的方式输出:

head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>

head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# [u'The Dormouse's story']

BeautifulSoup 对象本身一定会包含子节点,也就是说<html>标签也是 BeautifulSoup 对象的子节点:

len(soup.contents)
# 1
soup.contents[0].name
# u'html'

字符串没有 .contents 属性,因为字符串没有子节点:

text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'

通过tag的 .children 生成器,可以对tag的子节点进行循环:

for child in title_tag.children:
    print(child)
    # The Dormouse's story

.descendants

.contents 和 .children 属性仅包含tag的直接子节点.例如,<head>标签只有一个直接子节点<title>

head_tag.contents
# [<title>The Dormouse's story</title>]

但是<title>标签也包含一个子节点:字符串 “The Dormouse’s story”,这种情况下字符串 “The Dormouse’s story”也属于<head>标签的子孙节点. .descendants 属性可以对所有tag的子孙节点进行递归循环:

for child in head_tag.descendants:
    print(child)
    # <title>The Dormouse's story</title>
    # The Dormouse's story

上面的例子中, <head>标签只有一个子节点,但是有2个子孙节点:<title>节点和<title>的子节点(字符串). BeautifulSoup 有一个直接子节点(<html>节点),却有很多子孙节点:

len(list(soup.children))
# 1
len(list(soup.descendants))
# 25

.string

如果tag只有一个 NavigableString 类型子节点,那么这个tag可以使用 .string 得到子节点:

title_tag.string
# u'The Dormouse's story'

如果一个tag仅有一个子节点,那么这个tag也可以使用 .string 方法,输出结果与当前唯一子节点的 .string 结果相同:

head_tag.contents
# [<title>The Dormouse's story</title>]

head_tag.string
# u'The Dormouse's story'

如果tag包含了多个子节点,tag就无法确定 .string 方法应该调用哪个子节点的内容, .string 的输出结果是 None :

print(soup.html.string)
# None

.strings 和 stripped_strings

如果tag中包含多个字符串,可以使用 .strings 来循环获取:

for string in soup.strings:
    print(repr(string))
    # u"The Dormouse's story"
    # u'\n\n'
    # u"The Dormouse's story"
    # u'\n\n'
    # u'Once upon a time there were three little sisters; and their names were\n'
    # u'Elsie'
    # u',\n'
    # u'Lacie'
    # u' and\n'
    # u'Tillie'
    # u';\nand they lived at the bottom of a well.'
    # u'\n\n'
    # u'...'
    # u'\n'

输出的字符串中可能包含了很多空格或空行,使用 .stripped_strings 可以去除多余空白内容:

for string in soup.stripped_strings:
    print(repr(string))
    # u"The Dormouse's story"
    # u"The Dormouse's story"
    # u'Once upon a time there were three little sisters; and their names were'
    # u'Elsie'
    # u','
    # u'Lacie'
    # u'and'
    # u'Tillie'
    # u';\nand they lived at the bottom of a well.'
    # u'...'

全部是空格的行会被忽略掉,段首和段末的空白会被删除
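与 .stripped_strings 类似,get_text() 方法也接受 separator 和 strip 参数,可以一步得到去除首尾空白后的全部文本(小示例):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>One <b>bold</b> word</p>", "html.parser")
# separator 指定连接符,strip=True 去掉每段文本的首尾空白
print(soup.get_text("|", strip=True))
# One|bold|word
```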

父节点

继续分析文档树,每个tag或字符串都有父节点:被包含在某个tag中

.parent

通过 .parent 属性来获取某个元素的父节点.在例子“爱丽丝”的文档中,<head>标签是<title>标签的父节点:

title_tag = soup.title
title_tag
# <title>The Dormouse's story</title>
title_tag.parent
# <head><title>The Dormouse's story</title></head>

文档title的字符串也有父节点:<title>标签

title_tag.string.parent
# <title>The Dormouse's story</title>

文档的顶层节点比如<html>的父节点是 BeautifulSoup 对象:

html_tag = soup.html
type(html_tag.parent)
# <class 'bs4.BeautifulSoup'>

BeautifulSoup 对象的 .parent 是None:

print(soup.parent)
# None

.parents

通过元素的 .parents 属性可以递归得到元素的所有父辈节点,下面的例子使用了 .parents 方法遍历了<a>标签到根节点的所有节点.

link = soup.a
link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
# p
# body
# html
# [document]
# None

兄弟节点

看一段简单的例子:

sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>")
print(sibling_soup.prettify())
# <html>
#  <body>
#   <a>
#    <b>
#     text1
#    </b>
#    <c>
#     text2
#    </c>
#   </a>
#  </body>
# </html>

因为<b>标签和<c>标签是同一层:它们是同一个元素的子节点,所以可以被称为兄弟节点.一段文档以标准格式输出时,兄弟节点有相同的缩进级别.在代码中也可以使用这种关系.

.next_sibling 和 .previous_sibling

在文档树中,使用 .next_sibling 和 .previous_sibling 属性来查询兄弟节点:

sibling_soup.b.next_sibling
# <c>text2</c>

sibling_soup.c.previous_sibling
# <b>text1</b>

<b>标签有 .next_sibling 属性,但是没有 .previous_sibling 属性,因为<b>标签在同级节点中是第一个.同理,<c>标签有 .previous_sibling 属性,却没有 .next_sibling 属性:

print(sibling_soup.b.previous_sibling)
# None
print(sibling_soup.c.next_sibling)
# None

例子中的字符串“text1”和“text2”不是兄弟节点,因为它们的父节点不同:

sibling_soup.b.string
# u'text1'

print(sibling_soup.b.string.next_sibling)
# None

实际文档中的tag的 .next_sibling 和 .previous_sibling 属性通常是字符串或空白. 看看“爱丽丝”文档:

<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>

如果以为第一个<a>标签的 .next_sibling 结果是第二个<a>标签,那就错了,真实结果是第一个<a>标签和第二个<a>标签之间的逗号和换行符:

link = soup.a
link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

link.next_sibling
# u',\n'

第二个<a>标签是逗号的 .next_sibling 属性:

link.next_sibling.next_sibling
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>

.next_siblings 和 .previous_siblings

通过 .next_siblings 和 .previous_siblings 属性可以对当前节点的兄弟节点迭代输出:

for sibling in soup.a.next_siblings:
    print(repr(sibling))
    # u',\n'
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
    # u' and\n'
    # <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
    # u'; and they lived at the bottom of a well.'
    # None

for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
    # ' and\n'
    # <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
    # u',\n'
    # <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
    # u'Once upon a time there were three little sisters; and their names were\n'
    # None

回退和前进

看一下“爱丽丝” 文档:

<html><head><title>The Dormouse's story</title></head>
<p class="title"><b>The Dormouse's story</b></p>

HTML解析器把这段字符串转换成一连串的事件: “打开<html>标签”,”打开一个<head>标签”,”打开一个<title>标签”,”添加一段字符串”,”关闭<title>标签”,”打开<p>标签”,等等.Beautiful Soup提供了重现解析器初始化过程的方法.

.next_element 和 .previous_element

.next_element 属性指向解析过程中下一个被解析的对象(字符串或tag),结果可能与 .next_sibling 相同,但通常是不一样的.

这是“爱丽丝”文档中最后一个<a>标签,它的 .next_sibling 结果是一个字符串,因为当前的解析过程因遇到<a>标签而中断了:

last_a_tag = soup.find("a", id="link3")
last_a_tag
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

last_a_tag.next_sibling
# '; and they lived at the bottom of a well.'

但这个<a>标签的 .next_element 属性结果是<a>标签被解析之后紧接着被解析的内容,不是<a>标签后的句子部分,而是字符串”Tillie”:

last_a_tag.next_element
# u'Tillie'

这是因为在原始文档中,字符串“Tillie” 在分号前出现,解析器先进入<a>标签,然后是字符串“Tillie”,然后关闭</a>标签,然后是分号和剩余部分.分号与<a>标签在同一层级,但是字符串“Tillie”会被先解析.

.previous_element 属性刚好与 .next_element 相反,它指向当前被解析的对象的前一个解析对象:

last_a_tag.previous_element
# u' and\n'
last_a_tag.previous_element.next_element
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

.next_elements 和 .previous_elements

通过 .next_elements 和 .previous_elements 的迭代器就可以向前或向后访问文档的解析内容,就好像文档正在被解析一样:

for element in last_a_tag.next_elements:
    print(repr(element))
# u'Tillie'
# u';\nand they lived at the bottom of a well.'
# u'\n\n'
# <p class="story">...</p>
# u'...'
# u'\n'
# None

搜索文档树

Beautiful Soup定义了很多搜索方法,这里着重介绍2个: find()find_all() .其它方法的参数和用法类似,请读者举一反三.

再以“爱丽丝”文档作为例子:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

使用 find_all() 类似的方法可以查找到想要查找的文档内容

过滤器

介绍 find_all() 方法前,先介绍一下过滤器的类型,这些过滤器贯穿整个搜索的API.过滤器可以被用在tag的name中,节点的属性中,字符串中或他们的混合中.

字符串

最简单的过滤器是字符串.在搜索方法中传入一个字符串参数,Beautiful Soup会查找与字符串完整匹配的内容,下面的例子用于查找文档中所有的<b>标签:

soup.find_all('b')
# [<b>The Dormouse's story</b>]

如果传入的是字节串参数,Beautiful Soup会当作UTF-8编码处理;可以传入一段Unicode编码的字符串来避免Beautiful Soup解析编码出错

正则表达式

如果传入正则表达式作为参数,Beautiful Soup会通过正则表达式的 search() 来匹配内容.下面例子中找出所有以b开头的标签,这表示<body>和<b>标签都应该被找到:

import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

下面代码找出所有名字中包含”t”的标签:

for tag in soup.find_all(re.compile("t")):
    print(tag.name)
# html
# title

列表

如果传入列表参数,Beautiful Soup会将与列表中任一元素匹配的内容返回.下面代码找到文档中所有<a>标签和<b>标签:

soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

True

True 可以匹配任何值,下面代码查找到所有的tag,但是不会返回字符串节点

for tag in soup.find_all(True):
    print(tag.name)
# html
# head
# title
# body
# p
# b
# p
# a
# a
# a
# p

方法

如果没有合适过滤器,那么还可以定义一个方法,方法只接受一个元素参数,如果这个方法返回 True 表示当前元素匹配并且被找到,如果不是则反回 False

下面方法校验了当前元素,如果包含 class 属性却不包含 id 属性,那么将返回 True:

def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

将这个方法作为参数传入 find_all() 方法,将得到所有<p>标签:

soup.find_all(has_class_but_no_id)
# [<p class="title"><b>The Dormouse's story</b></p>,
#  <p class="story">Once upon a time there were...</p>,
#  <p class="story">...</p>]

返回结果中只有<p>标签没有<a>标签,因为<a>标签还定义了”id”,没有返回<html>和<head>,因为<html>和<head>中没有定义”class”属性.

通过一个方法来过滤一类标签属性的时候, 这个方法的参数是要被过滤的属性的值, 而不是这个标签. 下面的例子是找出 href 属性不符合指定正则的 a 标签.

def not_lacie(href):
    return href and not re.compile("lacie").search(href)
soup.find_all(href=not_lacie)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

标签过滤方法可以使用复杂方法. 下面的例子可以过滤出前后都有文字的标签.

from bs4 import NavigableString
def surrounded_by_strings(tag):
    return (isinstance(tag.next_element, NavigableString)
            and isinstance(tag.previous_element, NavigableString))

for tag in soup.find_all(surrounded_by_strings):
    print(tag.name)
# p
# a
# a
# a
# p

现在来了解一下搜索方法的细节

find_all()

find_all( name , attrs , recursive , string , **kwargs )

find_all() 方法搜索当前tag的所有tag子节点,并判断是否符合过滤器的条件.这里有几个例子:

soup.find_all("title")
# [<title>The Dormouse's story</title>]

soup.find_all("p", "title")
# [<p class="title"><b>The Dormouse's story</b></p>]

soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find_all(id="link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

import re
soup.find(string=re.compile("sisters"))
# u'Once upon a time there were three little sisters; and their names were\n'

有几个方法很相似,还有几个方法是新的,参数中的 stringid 是什么含义? 为什么 find_all("p", "title") 返回的是CSS Class为”title”的<p>标签? 我们来仔细看一下 find_all() 的参数

name 参数

name 参数可以查找所有名字为 name 的tag,字符串对象会被自动忽略掉.

简单的用法如下:

soup.find_all("title")
# [<title>The Dormouse's story</title>]

重申: 搜索 name 参数的值可以是任一类型的 过滤器 :字符串,正则表达式,列表,方法或是 True .

keyword 参数

如果一个指定名字的参数不是搜索内置的参数名,搜索时会把该参数当作指定名字tag的属性来搜索,如果包含一个名字为 id 的参数,Beautiful Soup会搜索每个tag的”id”属性.

soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

如果传入 href 参数,Beautiful Soup会搜索每个tag的”href”属性:

soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

搜索指定名字的属性时可以使用的参数值包括 字符串 , 正则表达式 , 列表, True .

下面的例子在文档树中查找所有包含 id 属性的tag,无论 id 的值是什么:

soup.find_all(id=True)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

使用多个指定名字的参数可以同时过滤tag的多个属性:

soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

有些tag属性名在搜索中不能直接用作关键字参数,比如HTML5中的 data-* 属性:

data_soup = BeautifulSoup('<div data-foo="value">foo!</div>')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression

但是可以通过 find_all() 方法的 attrs 参数定义一个字典参数来搜索包含特殊属性的tag:

data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]

按CSS搜索

按照CSS类名搜索tag的功能非常实用,但标识CSS类名的关键字 class 在Python中是保留字,使用 class 做参数会导致语法错误.从Beautiful Soup的4.1.1版本开始,可以通过 class_ 参数搜索有指定CSS类名的tag:

soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

class_ 参数同样接受不同类型的 过滤器 ,字符串,正则表达式,方法或 True :

soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

tag的 class 属性是 多值属性 .按照CSS类名搜索tag时,可以分别搜索tag中的每个CSS类名:

css_soup = BeautifulSoup('<p class="body strikeout"></p>')
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]

css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]

搜索 class 属性时也可以通过CSS值完全匹配:

css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

完全匹配 class 的值时,如果CSS类名的顺序与实际不符,将搜索不到结果:

css_soup.find_all("p", class_="strikeout body")
# []

也可以不使用 class_ 参数,而是通过 attrs 参数以字典的形式按 class 属性搜索:

soup.find_all("a", attrs={"class": "sister"})
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

string 参数

通过 string 参数可以搜索文档中的字符串内容.与 name 参数的可选值一样, string 参数接受 字符串, 正则表达式 , 列表, True . 看例子:

soup.find_all(string="Elsie")
# [u'Elsie']

soup.find_all(string=["Tillie", "Elsie", "Lacie"])
# [u'Elsie', u'Lacie', u'Tillie']

soup.find_all(string=re.compile("Dormouse"))
# [u"The Dormouse's story", u"The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
    """Return True if this string is the only child of its parent tag."""
    return (s == s.parent.string)

soup.find_all(string=is_the_only_string_within_a_tag)
# [u"The Dormouse's story", u"The Dormouse's story", u'Elsie', u'Lacie', u'Tillie', u'...']

虽然 string 参数用于搜索字符串,还可以与其它参数混合使用来过滤tag.Beautiful Soup会找到 .string 方法与 string 参数值相符的tag.下面代码用来搜索内容里面包含“Elsie”的<a>标签:

soup.find_all("a", string="Elsie")
# [<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>]

limit 参数

find_all() 方法返回全部的搜索结果,如果文档树很大那么搜索会很慢.如果我们不需要全部结果,可以使用 limit 参数限制返回结果的数量.效果与SQL中的limit关键字类似,当搜索到的结果数量达到 limit 的限制时,就停止搜索返回结果.

文档树中有3个tag符合搜索条件,但结果只返回了2个,因为我们限制了返回数量:

soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

recursive 参数

调用tag的 find_all() 方法时,Beautiful Soup会检索当前tag的所有子孙节点,如果只想搜索tag的直接子节点,可以使用参数 recursive=False .

一段简单的文档:

<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
...

是否使用 recursive 参数的搜索结果:

soup.html.find_all("title")
# [<title>The Dormouse's story</title>]

soup.html.find_all("title", recursive=False)
# []


<title>标签在 <html> 标签下, 但并不是直接子节点, <head> 标签才是直接子节点. 在允许查询所有后代节点时 Beautiful Soup 能够查找到 <title> 标签. 但是使用了 recursive=False 参数之后,只能查找直接子节点,这样就查不到 <title> 标签了.

Beautiful Soup 提供了多种DOM树搜索方法. 这些方法都使用了类似的参数定义. 比如这些方法: find_all(): name, attrs, string, limit. 但是只有 find_all() 和 find() 支持 recursive 参数.

像调用 find_all() 一样调用tag

find_all() 几乎是Beautiful Soup中最常用的搜索方法,所以我们定义了它的简写方法. BeautifulSoup 对象和 tag 对象可以被当作一个方法来使用,这个方法的执行结果与调用这个对象的 find_all() 方法相同,下面两行代码是等价的:

soup.find_all("a")
soup("a")

这两行代码也是等价的:

soup.title.find_all(string=True)
soup.title(string=True)

find()

find( name , attrs , recursive , string , **kwargs )

find_all() 方法将返回文档中符合条件的所有tag,但有时候我们只想得到一个结果.比如文档中只有一个<body>标签,那么使用 find_all() 方法来查找<body>标签就不太合适, 与其使用 find_all 方法并设置 limit=1 参数,不如直接使用 find() 方法.下面两行代码是等价的:

soup.find_all('title', limit=1)
# [<title>The Dormouse's story</title>]

soup.find('title')
# <title>The Dormouse's story</title>

唯一的区别是 find_all() 方法的返回结果是只包含一个元素的列表,而 find() 方法直接返回结果.

find_all() 方法没有找到目标时返回空列表, find() 方法找不到目标时,返回 None .

print(soup.find("nosuchtag"))
# None
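由于 find() 找不到目标时返回 None,在返回值上直接链式取属性会抛出 AttributeError,使用前最好先做判断(小示例):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>text</p>", "html.parser")
tag = soup.find("nosuchtag")
# find() 找不到时返回 None,直接访问 tag.name 会抛 AttributeError
if tag is not None:
    print(tag.name)
else:
    print("not found")  # 本例输出 not found
```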

soup.head.title 是 tag的名字 方法的简写.这个简写的原理就是多次调用当前tag的 find() 方法:

soup.head.title
# <title>The Dormouse's story</title>

soup.find("head").find("title")
# <title>The Dormouse's story</title>

find_parents() 和 find_parent()

find_parents( name , attrs , recursive , string , **kwargs )

find_parent( name , attrs , recursive , string , **kwargs )

我们已经用了很大篇幅来介绍 find_all()find() 方法,Beautiful Soup中还有10个用于搜索的API.它们中的五个用的是与 find_all() 相同的搜索参数,另外5个与 find() 方法的搜索参数类似.区别仅是它们搜索文档的不同部分.

记住: find_all() 和 find() 只搜索当前节点的所有子节点,孙子节点等. find_parents() 和 find_parent() 用来搜索当前节点的父辈节点,搜索方法与普通tag的搜索方法相同. 我们从一个文档中的一个叶子节点开始:

a_string = soup.find(string="Lacie")
a_string
# u'Lacie'

a_string.find_parents("a")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

a_string.find_parent("p")
# <p class="story">Once upon a time there were three little sisters; and their names were
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a> and
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>;
#  and they lived at the bottom of a well.</p>

a_string.find_parents("p", class_="title")
# []

文档中的一个<a>标签是当前叶子节点的直接父节点,所以可以被找到.还有一个<p>标签,是目标叶子节点的间接父辈节点,所以也可以被找到.包含class值为”title”的<p>标签不是目标叶子节点的父辈节点,所以通过 find_parents() 方法搜索不到.

find_parent()find_parents() 方法会让人联想到 .parent 和 .parents 属性.它们之间的联系非常紧密.搜索父辈节点的方法实际上就是对 .parents 属性的迭代搜索.

find_next_siblings() 和 find_next_sibling()

find_next_siblings( name , attrs , recursive , string , **kwargs )

find_next_sibling( name , attrs , recursive , string , **kwargs )

这2个方法通过 .next_siblings 属性对当前tag之后解析的所有兄弟tag节点进行迭代, find_next_siblings() 方法返回所有符合条件的后面的兄弟节点, find_next_sibling() 只返回符合条件的后面的第一个tag节点.

first_link = soup.a
first_link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

first_link.find_next_siblings("a")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

first_story_paragraph = soup.find("p", "story")
first_story_paragraph.find_next_sibling("p")
# <p class="story">...</p>

find_previous_siblings() 和 find_previous_sibling()

find_previous_siblings( name , attrs , recursive , string , **kwargs )

find_previous_sibling( name , attrs , recursive , string , **kwargs )

这2个方法通过 .previous_siblings 属性对当前tag的前面解析的兄弟tag节点进行迭代, find_previous_siblings() 方法返回所有符合条件的前面的兄弟节点, find_previous_sibling() 方法返回第一个符合条件的前面的兄弟节点:

last_link = soup.find("a", id="link3")
last_link
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

last_link.find_previous_siblings("a")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

first_story_paragraph = soup.find("p", "story")
first_story_paragraph.find_previous_sibling("p")
# <p class="title"><b>The Dormouse's story</b></p>

find_all_next() 和 find_next()

find_all_next( name , attrs , recursive , string , **kwargs )

find_next( name , attrs , recursive , string , **kwargs )

这2个方法通过 .next_elements 属性对当前tag的之后的tag和字符串进行迭代, find_all_next() 方法返回所有符合条件的节点, find_next() 方法返回第一个符合条件的节点:

first_link = soup.a
first_link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

first_link.find_all_next(string=True)
# [u'Elsie', u',\n', u'Lacie', u' and\n', u'Tillie',
#  u';\nand they lived at the bottom of a well.', u'\n\n', u'...', u'\n']

first_link.find_next("p")
# <p class="story">...</p>

第一个例子中,字符串 “Elsie”也被显示出来,尽管它被包含在我们开始查找的<a>标签的里面.第二个例子中,最后一个<p>标签也被显示出来,尽管它与我们开始查找位置的<a>标签不属于同一部分.这两个例子说明,搜索的重点是元素要匹配过滤器的条件,并且出现在文档中开始查找位置的后面,而不在于它与开始查找的元素的结构关系.

find_all_previous() 和 find_previous()

find_all_previous( name , attrs , recursive , string , **kwargs )

find_previous( name , attrs , recursive , string , **kwargs )

这2个方法通过 .previous_elements 属性对当前节点前面的tag和字符串进行迭代, find_all_previous() 方法返回所有符合条件的节点, find_previous() 方法返回第一个符合条件的节点.

first_link = soup.a
first_link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

first_link.find_all_previous("p")
# [<p class="story">Once upon a time there were three little sisters; ...</p>,
#  <p class="title"><b>The Dormouse's story</b></p>]

first_link.find_previous("title")
# <title>The Dormouse's story</title>

find_all_previous("p") 返回了文档中的第一段(class=”title”的那段),但还返回了第二段,<p>标签包含了我们开始查找的<a>标签.不要惊讶,这段代码的功能是查找所有出现在指定<a>标签之前的<p>标签,因为这个<p>标签包含了开始的<a>标签,所以<p>标签一定是在<a>之前出现的.

CSS选择器

Beautiful Soup支持大部分的CSS选择器 http://www.w3.org/TR/CSS2/selector.html, 在 Tag 或 BeautifulSoup 对象的 .select() 方法中传入字符串参数, 即可使用CSS选择器的语法找到tag:

soup.select("title")
# [<title>The Dormouse's story</title>]

soup.select("p:nth-of-type(3)")
# [<p class="story">...</p>]

通过tag标签逐层查找:

soup.select("body a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie"  id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("html head title")
# [<title>The Dormouse's story</title>]

找到某个tag标签下的直接子标签:

soup.select("head > title")
# [<title>The Dormouse's story</title>]

soup.select("p > a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie"  id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("p > a:nth-of-type(2)")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

soup.select("p > #link1")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select("body > a")
# []

找到兄弟节点标签:

soup.select("#link1 ~ .sister")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie"  id="link3">Tillie</a>]

soup.select("#link1 + .sister")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

通过CSS的类名查找:

soup.select(".sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select("[class~=sister]")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

通过tag的id查找:

soup.select("#link1")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select("a#link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

同时用多种CSS选择器查询元素:

soup.select("#link1,#link2")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

通过是否存在某个属性来查找:

soup.select('a[href]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

通过属性的值来查找:

soup.select('a[href="http://example.com/elsie"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

soup.select('a[href^="http://example.com/"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select('a[href$="tillie"]')
# [<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.select('a[href*=".com/el"]')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

通过语言设置来查找:

multilingual_markup = """
 <p lang="en">Hello</p>
 <p lang="en-us">Howdy, y'all</p>
 <p lang="en-gb">Pip-pip, old fruit</p>
 <p lang="fr">Bonjour mes amis</p>
"""
multilingual_soup = BeautifulSoup(multilingual_markup)
multilingual_soup.select('p[lang|=en]')
# [<p lang="en">Hello</p>,
#  <p lang="en-us">Howdy, y'all</p>,
#  <p lang="en-gb">Pip-pip, old fruit</p>]

返回查找到的元素的第一个

soup.select_one(".sister")
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

对于熟悉CSS选择器语法的人来说这是个非常方便的方法.如果你仅仅需要CSS选择器的功能,那么直接使用 lxml 也可以,而且速度更快,支持的CSS选择器语法也更多;但Beautiful Soup把CSS选择器的语法整合进了自身简单易用的API中.

修改文档树

Beautiful Soup的强项是文档树的搜索,但同时也可以方便的修改文档树

修改tag的名称和属性

在 Attributes 的章节中已经介绍过这个功能,但是再看一遍也无妨. 重命名一个tag,改变属性的值,添加或删除属性:

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>')
tag = soup.b

tag.name = "blockquote"
tag['class'] = 'verybold'
tag['id'] = 1
tag
# <blockquote class="verybold" id="1">Extremely bold</blockquote>

del tag['class']
del tag['id']
tag
# <blockquote>Extremely bold</blockquote>

修改 .string

给tag的 .string 属性赋值,就相当于用当前的内容替代了原来的内容:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)

tag = soup.a
tag.string = "New link text."
tag
# <a href="http://example.com/">New link text.</a>

注意: 如果当前的tag包含了其它tag,那么给它的 .string 属性赋值会覆盖掉原有的所有内容包括子tag
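下面用一小段代码验证这个注意事项(示例假设已安装 bs4,并显式使用 html.parser 解析器,仅作演示):

```python
from bs4 import BeautifulSoup

# 这个 <a> 标签包含一个子标签 <i>
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup, "html.parser")
tag = soup.a

# 给 .string 赋值会把包括 <i> 在内的全部子节点一并替换掉
tag.string = "New link text."
result = str(tag)
```

可以看到子标签 <i> 连同原有文本一起被新字符串替换掉了.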

append()

Tag.append() 方法向tag中添加内容,就好像Python列表的 .append() 方法:

soup = BeautifulSoup("<a>Foo</a>")
soup.a.append("Bar")

soup
# <html><head></head><body><a>FooBar</a></body></html>
soup.a.contents
# [u'Foo', u'Bar']

如果想添加一段文本内容到文档中也没问题,可以调用Python的 append() 方法 或调用 NavigableString 的构造方法:

from bs4 import NavigableString

soup = BeautifulSoup("<b></b>")
tag = soup.b
tag.append("Hello")
new_string = NavigableString(" there")
tag.append(new_string)
tag
# <b>Hello there</b>
tag.contents
# [u'Hello', u' there']

如果想要创建一段注释,或 NavigableString 的任何子类, 只要调用 NavigableString 的构造方法:

from bs4 import Comment
new_comment = soup.new_string("Nice to see you.", Comment)
tag.append(new_comment)
tag
# <b>Hello there<!--Nice to see you.--></b>
tag.contents
# [u'Hello', u' there', u'Nice to see you.']

(这是 Beautiful Soup 4.2.1 中新增的功能)

new_tag()

创建一个tag最好的方法是调用工厂方法 BeautifulSoup.new_tag() :

soup = BeautifulSoup("<b></b>")
original_tag = soup.b

new_tag = soup.new_tag("a", href="http://www.example.com")
original_tag.append(new_tag)
original_tag
# <b><a href="http://www.example.com"></a></b>

new_tag.string = "Link text."
original_tag
# <b><a href="http://www.example.com">Link text.</a></b>

第一个参数作为tag的name,是必填,其它参数选填
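补充一个小示例:由于 class 是 Python 的关键字,不能直接写成 new_tag(..., class="external") 这样的关键字参数,可以用字典解包的方式传入(示例使用 html.parser,仅作演示):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<b></b>", "html.parser")

# class 是 Python 关键字, 通过 ** 解包字典的方式传入该属性
new_tag = soup.new_tag("a", href="http://www.example.com", **{"class": "external"})
new_tag.string = "Link text."
soup.b.append(new_tag)
result = str(soup.b)
```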

insert()

Tag.insert() 方法与 Tag.append() 方法类似,区别是不会把新元素添加到父节点 .contents 属性的最后,而是把元素插入到指定的位置.与Python列表的 .insert() 方法用法相同:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.insert(1, "but did not endorse ")
tag
# <a href="http://example.com/">I linked to but did not endorse <i>example.com</i></a>
tag.contents
# [u'I linked to ', u'but did not endorse', <i>example.com</i>]

insert_before() 和 insert_after()

insert_before() 方法在当前tag或文本节点前插入内容:

soup = BeautifulSoup("<b>stop</b>")
tag = soup.new_tag("i")
tag.string = "Don't"
soup.b.string.insert_before(tag)
soup.b
# <b><i>Don't</i>stop</b>

insert_after() 方法在当前tag或文本节点后插入内容:

soup.b.i.insert_after(soup.new_string(" ever "))
soup.b
# <b><i>Don't</i> ever stop</b>
soup.b.contents
# [<i>Don't</i>, u' ever ', u'stop']

clear()

Tag.clear() 方法移除当前tag的内容:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.clear()
tag
# <a href="http://example.com/"></a>

extract()

PageElement.extract() 方法将当前tag移除文档树,并作为方法结果返回:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

i_tag = soup.i.extract()

a_tag
# <a href="http://example.com/">I linked to</a>

i_tag
# <i>example.com</i>

print(i_tag.parent)
# None

这个方法实际上产生了2个文档树: 一个是用来解析原始文档的 BeautifulSoup 对象,另一个是被移除并且返回的tag.被移除并返回的tag可以继续调用 extract 方法:

my_string = i_tag.string.extract()
my_string
# u'example.com'

print(my_string.parent)
# None
i_tag
# <i></i>

decompose()

Tag.decompose() 方法将当前节点移除文档树并完全销毁:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

soup.i.decompose()

a_tag
# <a href="http://example.com/">I linked to</a>

replace_with()

PageElement.replace_with() 方法移除文档树中的某段内容,并用新tag或文本节点替代它:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

new_tag = soup.new_tag("b")
new_tag.string = "example.net"
a_tag.i.replace_with(new_tag)

a_tag
# <a href="http://example.com/">I linked to <b>example.net</b></a>

replace_with() 方法返回被替代的tag或文本节点,可以用来浏览或添加到文档树其它地方
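下面的示例演示如何利用这个返回值,把被替换下来的tag插入到文档树的其它位置(示例使用 html.parser,仅作演示):

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup, "html.parser")
a_tag = soup.a

new_tag = soup.new_tag("b")
new_tag.string = "example.net"
old_tag = a_tag.i.replace_with(new_tag)  # 返回被替换下来的 <i> 标签

# 被替换的标签已脱离文档树, 可以再插入到其它位置
a_tag.append(" (was ")
a_tag.append(old_tag)
a_tag.append(")")
result = str(a_tag)
```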

wrap()

PageElement.wrap() 方法可以对指定的tag元素进行包装,并返回包装后的结果:

soup = BeautifulSoup("<p>I wish I was bold.</p>")
soup.p.string.wrap(soup.new_tag("b"))
# <b>I wish I was bold.</b>

soup.p.wrap(soup.new_tag("div"))
# <div><p><b>I wish I was bold.</b></p></div>

该方法在 Beautiful Soup 4.0.5 中添加

unwrap()

Tag.unwrap() 方法与 wrap() 方法相反.它将tag本身移除,并用tag内部的内容替换它.该方法常被用来进行标记的解包:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

a_tag.i.unwrap()
a_tag
# <a href="http://example.com/">I linked to example.com</a>

replace_with() 方法相同, unwrap() 方法返回被移除的tag

输出

格式化输出

prettify() 方法将Beautiful Soup的文档树格式化后以Unicode编码输出,每个XML/HTML标签都独占一行:

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
soup.prettify()
# '<html>\n <head>\n </head>\n <body>\n  <a href="http://example.com/">\n...'

print(soup.prettify())
# <html>
#  <head>
#  </head>
#  <body>
#   <a href="http://example.com/">
#    I linked to
#    <i>
#     example.com
#    </i>
#   </a>
#  </body>
# </html>

BeautifulSoup 对象和它的tag节点都可以调用 prettify() 方法:

print(soup.a.prettify())
# <a href="http://example.com/">
#  I linked to
#  <i>
#   example.com
#  </i>
# </a>

压缩输出

如果只想得到结果字符串,不重视格式,那么可以对一个 BeautifulSoup 对象或 Tag 对象使用Python的 unicode() 或 str() 方法:

str(soup)
# '<html><head></head><body><a href="http://example.com/">I linked to <i>example.com</i></a></body></html>'

unicode(soup.a)
# u'<a href="http://example.com/">I linked to <i>example.com</i></a>'

str() 方法返回UTF-8编码的字符串,可以指定 编码 的设置.

还可以调用 encode() 方法获得字节码或调用 decode() 方法获得Unicode.
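在 Python 3 中,encode() 返回字节串,decode() 返回 str(即Unicode字符串),下面是一个最小示例(使用 html.parser,仅作演示):

```python
from bs4 import BeautifulSoup

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup, "html.parser")

as_bytes = soup.a.encode("utf-8")  # bytes 字节串
as_text = soup.a.decode()          # str (Unicode) 字符串

assert isinstance(as_bytes, bytes)
assert isinstance(as_text, str)
```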

输出格式

Beautiful Soup输出时会将HTML中的特殊字符转换成Unicode,比如“&ldquo;”:

soup = BeautifulSoup("&ldquo;Dammit!&rdquo; he said.")
unicode(soup)
# u'<html><head></head><body>\u201cDammit!\u201d he said.</body></html>'

如果将文档转换成字符串,Unicode编码会被编码成UTF-8.这样就无法正确显示HTML特殊字符了:

str(soup)
# '<html><head></head><body>\xe2\x80\x9cDammit!\xe2\x80\x9d he said.</body></html>'

get_text()

如果只想得到tag中包含的文本内容,那么可以调用 get_text() 方法,这个方法获取到tag中包含的所有文本内容,包括子孙tag中的内容,并将结果作为Unicode字符串返回:

markup = '<a href="http://example.com/">\nI linked to <i>example.com</i>\n</a>'
soup = BeautifulSoup(markup)

soup.get_text()
# u'\nI linked to example.com\n'
soup.i.get_text()
# u'example.com'

可以通过参数指定tag的文本内容的分隔符:

soup.get_text("|")
# u'\nI linked to |example.com|\n'

还可以去除获得文本内容的前后空白:

soup.get_text("|", strip=True)
# u'I linked to|example.com'

或者使用 .stripped_strings 生成器,获得文本列表后手动处理列表:

[text for text in soup.stripped_strings]
# [u'I linked to', u'example.com']

指定文档解析器

如果仅是想要解析HTML文档,只要用文档创建 BeautifulSoup 对象就可以了.Beautiful Soup会自动选择一个解析器来解析文档.但是还可以通过参数指定使用哪种解析器来解析当前文档.

BeautifulSoup 第一个参数应该是要被解析的文档字符串或是文件句柄,第二个参数用来标识怎样解析文档.如果第二个参数为空,那么Beautiful Soup根据当前系统安装的库自动选择解析器,解析器的优先顺序: lxml, html5lib, Python标准库.在下面两种条件下解析器优先顺序会变化:

  • 要解析的文档是什么类型: 目前支持, “html”, “xml”, 和 “html5”
  • 指定使用哪种解析器: 目前支持, “lxml”, “html5lib”, 和 “html.parser”

安装解析器 章节介绍了可以使用哪种解析器,以及如何安装.

如果指定的解析器没有安装,Beautiful Soup会自动选择其它方案.目前只有 lxml 解析器支持XML文档的解析,在没有安装lxml库的情况下,创建 BeautifulSoup 对象时无论是否指定使用lxml,都无法得到解析后的XML对象.

解析器之间的区别

Beautiful Soup为不同的解析器提供了相同的接口,但解析器本身是有区别的.同一篇文档被不同的解析器解析后可能会生成不同结构的树型文档.区别最大的是HTML解析器和XML解析器,看下面片段被解析成HTML结构:

BeautifulSoup("<a><b /></a>")
# <html><head></head><body><a><b></b></a></body></html>

因为空标签<b />不符合HTML标准,所以解析器把它解析成<b></b>

同样的文档使用XML解析如下(解析XML需要安装lxml库).注意,空标签<b />依然被保留,并且文档前添加了XML头,而不是被包含在<html>标签内:

BeautifulSoup("<a><b /></a>", "xml")
# <?xml version="1.0" encoding="utf-8"?>
# <a><b/></a>

HTML解析器之间也有区别,如果被解析的HTML文档是标准格式,那么解析器之间没有任何差别,只是解析速度不同,结果都会返回正确的文档树.

但是如果被解析文档不是标准格式,那么不同的解析器返回结果可能不同.下面例子中,使用lxml解析错误格式的文档,结果</p>标签被直接忽略掉了:

BeautifulSoup("<a></p>", "lxml")
# <html><body><a></a></body></html>

使用html5lib库解析相同文档会得到不同的结果:

BeautifulSoup("<a></p>", "html5lib")
# <html><head></head><body><a><p></p></a></body></html>

html5lib库没有忽略掉</p>标签,而是自动补全了标签,还给文档树添加了<head>标签.

使用Python内置库解析结果如下:

BeautifulSoup("<a></p>", "html.parser")
# <a></a>

与lxml库类似,Python内置库忽略掉了</p>标签.与html5lib库不同的是,标准库没有尝试创建符合标准的文档格式或将文档片段包含在<body>标签内;与lxml不同的是,标准库甚至连<html>标签都没有尝试去添加.

因为文档片段“<a></p>”是错误格式,所以以上解析方式都能算作”正确”,html5lib库使用的是HTML5的部分标准,所以最接近”正确”.不过所有解析器的结构都能够被认为是”正常”的.

不同的解析器可能影响代码执行结果,如果在分发给别人的代码中使用了 BeautifulSoup ,那么最好注明使用了哪种解析器,以减少不必要的麻烦.
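例如,在代码中显式写明解析器,就能保证在不同环境下得到一致的文档树(下面的示例沿用上文的错误格式片段):

```python
from bs4 import BeautifulSoup

broken = "<a></p>"

# 显式指定解析器, 避免不同环境因安装的解析器不同而产生不同结果
soup = BeautifulSoup(broken, "html.parser")
result = str(soup)
# html.parser 会直接忽略掉孤立的 </p> 标签
```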

编码

任何HTML或XML文档都有自己的编码方式,比如ASCII 或 UTF-8,但是使用Beautiful Soup解析后,文档都被转换成了Unicode:

markup = "<h1>Sacr\xc3\xa9 bleu!</h1>"
soup = BeautifulSoup(markup)
soup.h1
# <h1>Sacré bleu!</h1>
soup.h1.string
# u'Sacr\xe9 bleu!'

这不是魔术(但很神奇),Beautiful Soup用了 编码自动检测 子库来识别当前文档编码并转换成Unicode编码. BeautifulSoup 对象的 .original_encoding 属性记录了自动识别编码的结果:

soup.original_encoding
# 'utf-8'

编码自动检测 功能大部分时候都能猜对编码格式,但有时候也会出错.有时候即使猜测正确,也是在逐个字节的遍历整个文档后才猜对的,这样很慢.如果预先知道文档编码,可以设置编码参数来减少自动检查编码出错的概率并且提高文档解析速度.在创建 BeautifulSoup 对象的时候设置 from_encoding 参数.

下面一段文档用了ISO-8859-8编码方式,这段文档太短,结果Beautiful Soup以为文档是用ISO-8859-7编码:

markup = b"<h1>\xed\xe5\xec\xf9</h1>"
soup = BeautifulSoup(markup)
soup.h1
# <h1>νεμω</h1>
soup.original_encoding
# 'ISO-8859-7'

通过传入 from_encoding 参数来指定编码方式:

soup = BeautifulSoup(markup, from_encoding="iso-8859-8")
soup.h1
# <h1>םולש</h1>
soup.original_encoding
# 'iso8859-8'

如果知道Beautiful Soup猜测的编码不正确, 但不知道具体采用了哪种编码, 可以把猜错的编码(依旧是乱码的那个)作为 exclude_encodings 参数传入, 这样解析时就不会再尝试用这种编码去解码了. 译者备注: 在没有指定编码的情况下, BS会自己猜测编码, 把不正确的编码排除掉, BS就更容易猜到正确编码.

soup = BeautifulSoup(markup, exclude_encodings=["ISO-8859-7"])
soup.h1
# <h1>םולש</h1>
soup.original_encoding
# 'WINDOWS-1255'

猜测结果是 Windows-1255 编码, 猜测结果可能不够准确, 但是 Windows-1255 编码是 ISO-8859-8 的扩展集, 所以猜测结果已经十分接近了, 并且不影响使用. (exclude_encodings 参数是 4.4.0版本的新功能)

少数情况下(通常是UTF-8编码的文档中包含了其它编码格式的内容),想获得正确的Unicode编码就不得不将文档中少数特殊编码字符替换成特殊Unicode编码,“REPLACEMENT CHARACTER” (U+FFFD, �). 如果Beautiful Soup猜测文档编码时作了特殊字符的替换,那么Beautiful Soup会把 UnicodeDammit 或 BeautifulSoup 对象的 .contains_replacement_characters 属性标记为 True .这样就可以知道当前文档进行Unicode编码后丢失了一部分特殊内容字符.如果文档中包含�而 .contains_replacement_characters 属性是 False ,则表示�就是文档中原来的字符,不是转码失败.

输出编码

通过Beautiful Soup输出文档时,不管输入文档是什么编码方式,输出编码均为UTF-8编码,下面例子输入文档是Latin-1编码:

markup = b'''
<html>
  <head>
    <meta content="text/html; charset=ISO-Latin-1" http-equiv="Content-type" />
  </head>
  <body>
    <p>Sacr\xe9 bleu!</p>
  </body>
</html>
'''

soup = BeautifulSoup(markup)
print(soup.prettify())
# <html>
#  <head>
#   <meta content="text/html; charset=utf-8" http-equiv="Content-type" />
#  </head>
#  <body>
#   <p>
#    Sacré bleu!
#   </p>
#  </body>
# </html>

注意,输出文档中的<meta>标签的编码设置已经修改成了与输出编码一致的UTF-8.

如果不想用UTF-8编码输出,可以将编码方式传入 prettify() 方法:

print(soup.prettify("latin-1"))
# <html>
#  <head>
#   <meta content="text/html; charset=latin-1" http-equiv="Content-type" />
# ...

还可以调用 BeautifulSoup 对象或任意节点的 encode() 方法,就像Python的字符串调用 encode() 方法一样:

soup.p.encode("latin-1")
# '<p>Sacr\xe9 bleu!</p>'

soup.p.encode("utf-8")
# '<p>Sacr\xc3\xa9 bleu!</p>'

如果文档中包含当前编码不支持的字符,那么这些字符将被转换成一系列XML特殊字符引用,下面例子中包含了Unicode编码字符SNOWMAN:

markup = u"<b>\N{SNOWMAN}</b>"
snowman_soup = BeautifulSoup(markup)
tag = snowman_soup.b

SNOWMAN字符在UTF-8编码中可以正常显示(看上去像是☃),但有些编码不支持SNOWMAN字符,比如ISO-Latin-1或ASCII,那么在这些编码中SNOWMAN字符会被转换成“&#9731;”:

print(tag.encode("utf-8"))
# <b>☃</b>

print(tag.encode("latin-1"))
# <b>&#9731;</b>

print(tag.encode("ascii"))
# <b>&#9731;</b>

Unicode, Dammit!

UnicodeDammit 是BS内置库, 主要用来猜测文档编码.

编码自动检测 功能可以在Beautiful Soup以外使用,检测某段未知编码时,可以使用这个方法:

from bs4 import UnicodeDammit
dammit = UnicodeDammit("Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'utf-8'

如果Python中安装了 chardet 或 cchardet ,那么编码检测功能的准确率将大大提高. 输入的字符越多,检测结果越精确,如果事先猜测到一些可能编码, 那么可以将猜测的编码作为参数,这样将优先检测这些编码:

dammit = UnicodeDammit("Sacr\xe9 bleu!", ["latin-1", "iso-8859-1"])
print(dammit.unicode_markup)
# Sacré bleu!
dammit.original_encoding
# 'latin-1'

编码自动检测 功能中有2项功能是Beautiful Soup库中用不到的

智能引号

使用Unicode时,Beautiful Soup还会智能地把引号转换成HTML或XML中的特殊字符:

markup = b"<p>I just \x93love\x94 Microsoft Word\x92s smart quotes</p>"

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="html").unicode_markup
# u'<p>I just &ldquo;love&rdquo; Microsoft Word&rsquo;s smart quotes</p>'

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="xml").unicode_markup
# u'<p>I just &#x201C;love&#x201D; Microsoft Word&#x2019;s smart quotes</p>'

也可以把引号转换为ASCII码:

UnicodeDammit(markup, ["windows-1252"], smart_quotes_to="ascii").unicode_markup
# u'<p>I just "love" Microsoft Word\'s smart quotes</p>'

很有用的功能,但是Beautiful Soup没有使用这种方式.默认情况下,Beautiful Soup把引号转换成Unicode:

UnicodeDammit(markup, ["windows-1252"]).unicode_markup
# u'<p>I just \u201clove\u201d Microsoft Word\u2019s smart quotes</p>'

矛盾的编码

有时文档的大部分都是用UTF-8,但同时还包含了Windows-1252编码的字符,就像微软的智能引号一样. 一些包含多个信息的来源网站容易出现这种情况. UnicodeDammit.detwingle() 方法可以把这类文档转换成纯UTF-8编码格式,看个简单的例子:

snowmen = (u"\N{SNOWMAN}" * 3)
quote = (u"\N{LEFT DOUBLE QUOTATION MARK}I like snowmen!\N{RIGHT DOUBLE QUOTATION MARK}")
doc = snowmen.encode("utf8") + quote.encode("windows_1252")

这段文档很杂乱,snowmen是UTF-8编码,引号是Windows-1252编码,直接输出时不能同时显示snowmen和引号,因为它们编码不同:

print(doc)
# ☃☃☃�I like snowmen!�

print(doc.decode("windows-1252"))
# ☃☃☃“I like snowmen!”

如果对这段文档用UTF-8解码就会得到 UnicodeDecodeError 异常,如果用Windows-1252解码就会得到一堆乱码. 幸好, UnicodeDammit.detwingle() 方法会把这段字符串转换成UTF-8编码,允许我们同时显示出文档中的snowmen和引号:

new_doc = UnicodeDammit.detwingle(doc)
print(new_doc.decode("utf8"))
# ☃☃☃“I like snowmen!”

UnicodeDammit.detwingle() 方法只能解码包含在UTF-8编码中的Windows-1252编码内容,但这解决了最常见的一类问题.

在创建 BeautifulSoup 或 UnicodeDammit 对象前,一定要先对这类文档调用 UnicodeDammit.detwingle() 确保文档的编码方式正确.如果直接去解析一段混入了Windows-1252编码内容的UTF-8文档,就会得到一堆乱码,比如: ☃☃☃“I like snowmen!”.

UnicodeDammit.detwingle() 方法在Beautiful Soup 4.1.0版本中新增

比较对象是否相同

两个 NavigableStringTag 对象具有相同的HTML或XML结构时, Beautiful Soup就判断这两个对象相同. 这个例子中, 2个 <b> 标签在 BS 中是相同的, 尽管他们在文档树的不同位置, 但是具有相同的表象: “<b>pizza</b>”

markup = "<p>I want <b>pizza</b> and more <b>pizza</b>!</p>"
soup = BeautifulSoup(markup, 'html.parser')
first_b, second_b = soup.find_all('b')
print(first_b == second_b)
# True

print(first_b.previous_element == second_b.previous_element)
# False

如果想判断两个对象是否严格的指向同一个对象可以通过 is 来判断

print(first_b is second_b)
# False

复制Beautiful Soup对象

copy.copy() 方法可以复制任意 TagNavigableString 对象

import copy
p_copy = copy.copy(soup.p)
print(p_copy)
# <p>I want <b>pizza</b> and more <b>pizza</b>!</p>

复制后的对象与源对象是相等的, 但指向不同的内存地址

print(soup.p == p_copy)
# True

print(soup.p is p_copy)
# False

源对象和复制对象的区别是源对象在文档树中, 而复制后的对象是独立的还没有添加到文档树中. 复制后对象的效果跟调用了 extract() 方法相同.

print(p_copy.parent)
# None

这是因为两个不同的对象不能同时占据文档树中的同一个位置

解析部分文档

如果仅仅因为想要查找文档中的<a>标签而将整篇文档进行解析,实在是浪费内存和时间.最快的方法是从一开始就把<a>标签以外的东西都忽略掉. SoupStrainer 类可以定义想要解析的文档内容,这样搜索文档时就不必先解析整篇文档,只会解析在 SoupStrainer 中定义过的部分. 创建一个 SoupStrainer 对象并作为 parse_only 参数传给 BeautifulSoup 的构造方法即可.

SoupStrainer

SoupStrainer 类接受与典型搜索方法相同的参数:name , attrs , recursive , string , **kwargs 。下面举例说明三种 SoupStrainer 对象:

from bs4 import SoupStrainer

only_a_tags = SoupStrainer("a")

only_tags_with_id_link2 = SoupStrainer(id="link2")

def is_short_string(string):
    return len(string) < 10

only_short_strings = SoupStrainer(string=is_short_string)

再拿“爱丽丝”文档来举例,来看看使用三种 SoupStrainer 对象做参数会有什么不同:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
    <body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

print(BeautifulSoup(html_doc, "html.parser", parse_only=only_a_tags).prettify())
# <a class="sister" href="http://example.com/elsie" id="link1">
#  Elsie
# </a>
# <a class="sister" href="http://example.com/lacie" id="link2">
#  Lacie
# </a>
# <a class="sister" href="http://example.com/tillie" id="link3">
#  Tillie
# </a>

print(BeautifulSoup(html_doc, "html.parser", parse_only=only_tags_with_id_link2).prettify())
# <a class="sister" href="http://example.com/lacie" id="link2">
#  Lacie
# </a>

print(BeautifulSoup(html_doc, "html.parser", parse_only=only_short_strings).prettify())
# Elsie
# ,
# Lacie
# and
# Tillie
# ...
#

还可以将 SoupStrainer 作为参数传入 搜索文档树 中提到的方法.这可能不是个常用用法,所以还是提一下:

soup = BeautifulSoup(html_doc)
soup.find_all(only_short_strings)
# [u'\n\n', u'\n\n', u'Elsie', u',\n', u'Lacie', u' and\n', u'Tillie',
#  u'\n\n', u'...', u'\n']

常见问题

代码诊断

如果想知道Beautiful Soup到底怎样处理一份文档,可以将文档传入 diagnose() 方法(Beautiful Soup 4.2.0中新增),Beautiful Soup会输出一份报告,说明不同的解析器会怎样处理这段文档,并标出当前的解析过程会使用哪种解析器:

from bs4.diagnose import diagnose
data = open("bad.html").read()
diagnose(data)

# Diagnostic running on Beautiful Soup 4.2.0
# Python version 2.7.3 (default, Aug  1 2012, 05:16:07)
# I noticed that html5lib is not installed. Installing it may help.
# Found lxml version 2.3.2.0
#
# Trying to parse your data with html.parser
# Here's what html.parser did with the document:
# ...

diagnose() 方法的输出结果可能帮助你找到问题的原因,如果不行,还可以把结果复制出来以便寻求他人的帮助

文档解析错误

文档解析错误有两种.一种是崩溃,Beautiful Soup尝试解析一段文档结果却抛出了异常,通常是 HTMLParser.HTMLParseError .还有一种异常情况,是Beautiful Soup解析后的文档树看起来与原来的内容相差很多.

这些错误几乎都不是Beautiful Soup的原因,这并不是因为Beautiful Soup的代码写得多么优秀,而是因为Beautiful Soup本身不包含任何文档解析代码.异常产生自被依赖的解析器,如果解析器不能很好的解析出当前的文档,那么最好的办法是换一个解析器.更多细节查看 安装解析器 章节.

最常见的解析错误是 HTMLParser.HTMLParseError: malformed start tagHTMLParser.HTMLParseError: bad end tag .这都是由Python内置的解析器引起的,解决方法是 安装lxml或html5lib

最常见的异常现象是当前文档找不到指定的Tag,而这个Tag光是用眼睛就足够发现的了. find_all() 方法返回 [] ,而 find() 方法返回 None .这是Python内置解析器的又一个问题: 解析器会跳过那些它不知道的tag.解决方法还是 安装lxml或html5lib

版本错误

  • SyntaxError: Invalid syntax (异常位置在代码行: ROOT_TAG_NAME = u'[document]' ),因为Python2语法的代码(没有经过迁移)直接在Python3中运行
  • ImportError: No module named HTMLParser 因为在Python3中执行Python2版本的Beautiful Soup
  • ImportError: No module named html.parser 因为在Python2中执行Python3版本的Beautiful Soup
  • ImportError: No module named BeautifulSoup 因为在没有安装BeautifulSoup3库的Python环境下执行代码,或忘记了BeautifulSoup4的代码需要从 bs4 包中引入
  • ImportError: No module named bs4 因为当前Python环境下还没有安装BeautifulSoup4

解析成XML

默认情况下,Beautiful Soup会将当前文档作为HTML格式解析,如果要解析XML文档,要在 BeautifulSoup 构造方法中加入第二个参数 “xml”:

soup = BeautifulSoup(markup, "xml")

当然,还需要 安装lxml

解析器的错误

  • 如果同样的代码在不同环境下结果不同,可能是因为两个环境下使用不同的解析器造成的.例如这个环境中安装了lxml,而另一个环境中只有html5lib, 解析器之间的区别 中说明了原因.修复方法是在 BeautifulSoup 的构造方法中中指定解析器
  • 因为HTML标签和属性是 大小写不敏感 的,所以3种HTML解析器在处理文档时都将tag和属性名转换成小写.例如文档中的 <TAG></TAG> 会被转换为 <tag></tag> .如果想要保留tag的大写的话,那么应该将文档 解析成XML .

杂项错误

  • UnicodeEncodeError: 'charmap' codec can't encode character u'\xfoo' in position bar (或其它类型的 UnicodeEncodeError )的错误,主要是两方面的错误(都不是Beautiful Soup的原因),第一种是正在使用的终端(console)无法显示部分Unicode,参考 Python wiki ,第二种是向文件写入时,被写入文件不支持部分Unicode,这时只要用 u.encode("utf8") 方法将编码转换为UTF-8.
  • KeyError: [attr] 因为调用 tag['attr'] 方法而引起,因为这个tag没有定义该属性.出错最多的是 KeyError: 'href'KeyError: 'class' .如果不确定某个属性是否存在时,用 tag.get('attr') 方法去获取它,跟获取Python字典的key一样
  • AttributeError: 'ResultSet' object has no attribute 'foo' 错误通常是因为把 find_all() 的返回结果当作单个tag或文本节点使用,实际上返回结果是一个 ResultSet 列表,需要对结果进行循环才能得到每个节点的 .foo 属性.或者使用 find() 方法仅获取到一个节点
  • AttributeError: 'NoneType' object has no attribute 'foo' 这个错误通常是在调用了 find() 方法后直接取某个属性 .foo ,但是 find() 方法并没有找到任何结果,所以它的返回值是 None .需要找出为什么 find() 的返回值是 None .
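针对上面几类错误,下面是一段防御式写法的小示例(示例文档为虚构,仅作演示):

```python
from bs4 import BeautifulSoup

html_doc = '<p><a href="http://example.com/">link</a><a>no href</a></p>'
soup = BeautifulSoup(html_doc, "html.parser")

# 用 tag.get() 代替 tag['attr'], 属性不存在时返回 None 而不是抛 KeyError
hrefs = [a.get("href") for a in soup.find_all("a")]

# find() 可能返回 None, 取属性或文本前先判断
title = soup.find("title")
title_text = title.get_text() if title is not None else ""
```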

如何提高效率

Beautiful Soup对文档的解析速度不会比它所依赖的解析器更快,如果对计算时间要求很高或者计算机的时间比程序员的时间更值钱,那么就应该直接使用 lxml .

换句话说,提高Beautiful Soup效率的办法是使用lxml作为解析器.Beautiful Soup用lxml做解析器比用html5lib或Python内置解析器速度快很多.

安装 cchardet 后,文档的编码检测速度会更快.

解析部分文档 不会节省多少解析时间,但是会节省很多内存,并且搜索时也会变得更快.
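下面是一段用 timeit 粗略比较各解析器速度的示意代码(html_doc 为虚构的测试文档;lxml 和 html5lib 未安装时会被自动跳过,具体耗时因环境而异):

```python
import timeit
from bs4 import BeautifulSoup

# 虚构的测试文档, 仅用于演示
html_doc = "<html><body>" + "<p class='x'>hello <b>world</b></p>" * 200 + "</body></html>"

def parse_with(parser):
    """返回用指定解析器解析 10 次的总耗时(秒)"""
    return timeit.timeit(lambda: BeautifulSoup(html_doc, parser), number=10)

for parser in ("html.parser", "lxml", "html5lib"):
    try:
        elapsed = parse_with(parser)
    except Exception:
        continue  # 当前环境未安装该解析器, 跳过
    print("%s: %.3fs" % (parser, elapsed))
```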

Beautiful Soup 3

Beautiful Soup 3是上一个发布版本,目前已经停止维护.Beautiful Soup 3库目前已经被几个主要的linux平台添加到源里:

$ apt-get install python-beautifulsoup

在PyPi中分发的包名字是 BeautifulSoup :

$ easy_install BeautifulSoup
$ pip install BeautifulSoup

或通过 Beautiful Soup 3.2.0源码包 安装

Beautiful Soup 3的在线文档查看 这里 .

迁移到BS4

只要一个小变动就能让大部分的Beautiful Soup 3代码使用Beautiful Soup 4的库和方法—-修改 BeautifulSoup 对象的引入方式:

from BeautifulSoup import BeautifulSoup

修改为:

from bs4 import BeautifulSoup

  • 如果代码抛出 ImportError 异常“No module named BeautifulSoup”,原因可能是尝试执行Beautiful Soup 3的代码,但环境中只安装了Beautiful Soup 4库
  • 如果代码抛出 ImportError 异常“No module named bs4”,原因可能是尝试运行Beautiful Soup 4的代码,但环境中只安装了Beautiful Soup 3库

虽然BS4兼容绝大部分BS3的功能,但BS3中的大部分方法已经不推荐使用了,这些方法按照 PEP8标准 重新命名.大部分重命名后的方法是向下兼容的,只有少数几个方法没有向下兼容.

上述内容就是BS3迁移到BS4的注意事项

需要的解析器

Beautiful Soup 3曾使用Python的 SGMLParser 解析器,这个模块在Python3中已经被移除了.Beautiful Soup 4默认使用系统的 html.parser ,也可以使用lxml或html5lib扩展库代替.查看 安装解析器 章节

因为解析器 html.parser 与 SGMLParser 不同, BS4 和 BS3 处理相同的文档会产生不同的对象结构. 原来使用lxml或html5lib解析文档的时候, 如果换成 html.parser , 解析的结果又会发生变化. 如果发生了这种情况, 只能修改对应的文档处理代码了.

方法名的变化

  • renderContents -> encode_contents
  • replaceWith -> replace_with
  • replaceWithChildren -> unwrap
  • findAll -> find_all
  • findAllNext -> find_all_next
  • findAllPrevious -> find_all_previous
  • findNext -> find_next
  • findNextSibling -> find_next_sibling
  • findNextSiblings -> find_next_siblings
  • findParent -> find_parent
  • findParents -> find_parents
  • findPrevious -> find_previous
  • findPreviousSibling -> find_previous_sibling
  • findPreviousSiblings -> find_previous_siblings
  • nextSibling -> next_sibling
  • previousSibling -> previous_sibling

Beautiful Soup构造方法的参数部分也有名字变化:

  • BeautifulSoup(parseOnlyThese=...) -> BeautifulSoup(parse_only=...)
  • BeautifulSoup(fromEncoding=...) -> BeautifulSoup(from_encoding=...)

为了适配Python3,修改了一个方法名:

  • Tag.has_key() -> Tag.has_attr()

修改了一个属性名,让它看起来更专业点:

  • Tag.isSelfClosing -> Tag.is_empty_element

修改了下面3个属性的名字,以免与Python保留字冲突.这些变动不是向下兼容的,如果在BS3中使用了这些属性,那么在BS4中这些代码无法执行.

  • UnicodeDammit.unicode -> UnicodeDammit.unicode_markup
  • Tag.next -> Tag.next_element
  • Tag.previous -> Tag.previous_element

生成器

将下列生成器按照PEP8标准重新命名,并转换成对象的属性:

  • childGenerator() -> children
  • nextGenerator() -> next_elements
  • nextSiblingGenerator() -> next_siblings
  • previousGenerator() -> previous_elements
  • previousSiblingGenerator() -> previous_siblings
  • recursiveChildGenerator() -> descendants
  • parentGenerator() -> parents

所以迁移到BS4版本时要替换这些代码:

for parent in tag.parentGenerator():
    ...

替换为:

for parent in tag.parents:
    ...

(两种调用方法现在都能使用)

BS3中有的生成器循环结束后会返回 None 然后结束.这是个bug.新版生成器不再返回 None .

BS4中增加了2个新的生成器, .strings 和 .stripped_strings . .strings 生成器返回NavigableString对象, .stripped_strings 生成器返回去除前后空白的Python string对象.
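两者的区别可以用一个小示例说明(使用 html.parser,仅作演示):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>  Hello  <b> world </b></p>", "html.parser")

raw = list(soup.strings)             # 保留原始空白
clean = list(soup.stripped_strings)  # 去掉前后空白, 并丢弃纯空白字符串
```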

XML

BS4中移除了解析XML的 BeautifulStoneSoup 类.如果要解析一段XML文档,使用 BeautifulSoup 构造方法并在第二个参数设置为“xml”.同时 BeautifulSoup 构造方法也不再识别 isHTML 参数.

Beautiful Soup处理XML空标签的方法升级了.旧版本中解析XML时必须指明哪个标签是空标签. 构造方法的 selfClosingTags 参数已经不再使用.新版Beautiful Soup将所有空标签解析为空元素,如果向空元素中添加子节点,那么这个元素就不再是空元素了.

实体

HTML或XML实体都会被解析成Unicode字符,Beautiful Soup 3版本中有很多处理实体的方法,在新版中都被移除了. BeautifulSoup 构造方法也不再接受 smartQuotesToconvertEntities 参数. 编码自动检测 方法依然有 smart_quotes_to 参数,但是默认会将引号转换成Unicode.内容配置项 HTML_ENTITIES , XML_ENTITIESXHTML_ENTITIES 在新版中被移除.因为它们代表的特性已经不再被支持.

如果在输出文档时想把Unicode字符转换成HTML实体,而不是输出成UTF-8编码,那就需要用到 输出格式 的方法.

迁移杂项

Tag.string 属性现在是一个递归操作.如果A标签只包含了一个B标签,那么A标签的.string属性值与B标签的.string属性值相同.

多值属性 比如 class 属性,其值是一个列表而不是一个字符串.这可能会影响按照CSS类名搜索tag的方式.
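一个小示例(使用 html.parser,仅作演示):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p class="body strikeout"></p>', "html.parser")

# BS4 中 class 这类多值属性的值是一个列表, 而不是字符串
classes = soup.p["class"]
```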

如果使用 find* 方法时同时传入了 string 参数 和 name 参数 ,Beautiful Soup会搜索指定name的tag,并且这个tag的 Tag.string 属性包含 string 参数的内容.结果中不会包含字符串本身.旧版本中Beautiful Soup会忽略掉 name 参数,只搜索 string 参数.

BeautifulSoup 构造方法不再支持 markupMassage 参数.现在由解析器负责文档的解析正确性.

很少被用到的几个解析类在新版中被移除,比如 ICantBelieveItsBeautifulSoup 和 BeautifulSOAP .现在由解析器完全负责如何解释模糊不清的文档标记.

prettify() 方法在新版中返回Unicode字符串,不再返回字节流.

爬虫项目推荐

GoPUP

  • GoPUP,项目所采集的数据皆来自公开的数据源,数据接口:百度、谷歌、头条、微博指数,宏观数据,利率数据,货币汇率,千里马、独角兽公司,新闻联播文字稿,影视票房数据,高校名单,疫情数据等等

安装

使用清华的 pip 源

pip install gopup -i https://pypi.tuna.tsinghua.edu.cn/simple

使用

查看文档 http://doc.gopup.cn/#/

import gopup as gp
df_index = gp.weibo_index(word="疫情", time_type="3month")
print(df_index)

weibo-spider

weibo-spider ,是一个微博的爬虫,可以连续的爬取一个或多个微博用户的数据,并将数据写入文件和数据库。支持下载微博中的原始图片/视频、转载图片/视频,Live Photo 中的视频。

安装

安装有两种方式,一种是源码安装,一种是pip

$ git clone https://github.com/dataabc/weiboSpider.git
$ cd weiboSpider
$ pip install -r requirements.txt

或者

python3 -m pip install weibo-spider

使用

  1. 安装后第一次运行命令行 python3 -m weibo_spider ,会自动在当前目录创建config.json配置文件
  2. 修改 config.json 文件中的 user_id_list 为要爬取的微博用户 ID。
  3. 再次运行 python3 -m weibo_spider

You-Get

you-get, 提供便利的方式来下载网络上的媒体信息,包括视频、音频、图片,支持 80+ 网站。小编经常用来下载 B 站视频。

安装

在 pip 安装之前需要先安装 FFmpeg (强烈推荐) 或 Libav,以及(可选) RTMPDump。

pip3 install you-get

升级:

pip3 install --upgrade you-get

使用

只需要简单的使用 you-get 命令就可以下载视频、图片、音频

you-get 网址

如:

you-get https://www.bilibili.com/video/BV1Dq4y1Z7zC?spm_id_from=333.851.b_7265636f6d6d656e64.1

musicdl

Music-dl ,是一个基于 Python3 的命令行工具,可以从多个网站搜索和下载音乐,解决音乐不知道在哪个网站的问题。Music-dl 的 API 是从公共网络获取的,下载不了 VIP 音乐。

Music-dl 支持 QQ音乐,酷狗音乐,网易云音乐,咪咕音乐和百度音乐。

安装

pip install musicdl

使用

music-dl -k 周杰伦

文章作者: 杰克成
版权声明: 本博客所有文章除特別声明外,均采用 CC BY 4.0 许可协议。转载请注明来源 杰克成 !