

Modify the user and user group


The elasticsearch.yml configuration is as follows:
cluster.name: elasticsearch
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
#Edit this file and append the following at the end
vim /etc/security/limits.conf
#Limit on the number of files each process can open
es soft nofile 65536
es hard nofile 65536
vim /etc/security/limits.d/20-nproc.conf
#Edit this file and append the following at the end
#Limit on the number of files each process can open
es soft nofile 65536
es hard nofile 65536
#OS-level limit on the number of processes each user can create
* hard nproc 4096
#Note: * stands for all Linux user names
vim /etc/sysctl.conf
#Add the following to the file
#Maximum number of VMAs (virtual memory areas) a process may own; the default is 65536
vm.max_map_count=655360
After editing this file, run sysctl -p to reload it:
sysctl -p
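For reference (the start step itself is not shown in the original), Elasticsearch is started with the bin/elasticsearch script from its install directory; a rough sketch, with the install path left as a placeholder:
cd <elasticsearch-home>
bin/elasticsearch -d    # -d runs it in the background; omit it to see the log in the terminal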
The startup error occurred because Elasticsearch was launched as the root user.

Switching to the es user and starting again produced another error:

The cause: starting as root had already created some files, and those files were still owned by the root user and group.
Solution: just run the following command:
chown -R es:es /usr/local/soft/es/es-cluster/
Then start again, and it starts normally.
Next comes the cluster configuration; the elasticsearch.yml for node-1 is as follows:
cluster.name: cluster-es
#Node name; it must be unique across nodes
node.name: node-1
#IP address; it must be unique across nodes
network.host: 192.168.15.100
node.master: true
node.data: true
http.port: 9200
transport.port: 9300
#The head plugin needs these two settings enabled
http.cors.enabled: true
http.cors.allow-origin: "*"
http.max_content_length: 200mb
#New in ES 7: required when bootstrapping a new cluster, used to elect the master
cluster.initial_master_nodes: ["node-1"]
#New in ES 7: node discovery
discovery.seed_hosts: ["192.168.15.100:9300", "192.168.15.101:9300","192.168.15.102:9300"]
gateway.recover_after_nodes: 2
network.tcp.keep_alive: true
network.tcp.no_delay: true
transport.tcp.compress: true
#Maximum number of concurrent shard rebalances allowed across the cluster; the default is 2
cluster.routing.allocation.cluster_concurrent_rebalance: 16
#Number of concurrent shard recoveries per node when nodes are added or removed and during rebalancing; the default is 2
cluster.routing.allocation.node_concurrent_recoveries: 16
#Number of concurrent initial primary-shard recoveries per node; the default is 4
cluster.routing.allocation.node_initial_primaries_recoveries: 16
Problem encountered: entering IP:9200 in the browser got no response.
Solution: stop the firewall
systemctl stop firewalld.service
Command to check the firewall status:
systemctl status firewalld.service
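If you would rather not disable the firewall entirely, a sketch of an alternative (assuming firewalld is the firewall in use) is to open just the ES ports:
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=9300/tcp --permanent
firewall-cmd --reload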
The cluster now starts normally, but a request sent from Postman returned the following error:

Fix: under that path, delete everything inside the data directory, then restart.
By analogy with a relational database, creating an index is like creating a database.

When we run the same PUT request again, it tells us the index cannot be created twice:

Viewing an index uses a GET request, which returns the index's details:

View all indices:
http://192.168.15.100:9200/_cat/indices?v

Delete an index
Deleting an index uses the DELETE request method.

After deleting, query it again to confirm it is gone.
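Since the screenshots are not reproduced here, the index requests in this section look roughly like the following (the index name shopping is the one used in the document examples below):
PUT http://192.168.15.100:9200/shopping (create the index)
GET http://192.168.15.100:9200/shopping (view the index)
GET http://192.168.15.100:9200/_cat/indices?v (view all indices)
DELETE http://192.168.15.100:9200/shopping (delete the index)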

Creating a document is like inserting a row into a database.
Sending this request directly (with no body) returns an error:
http://192.168.15.100:9200/shopping/_doc

The data needs to be added in the request body and stored in JSON format:
{
"title":"小米手机",
"category":"小米",
"images":"https:xiaomi.com",
"prices":"2999"
}

We then try again with a PUT request:

You can also specify a custom id for the stored document, though I wouldn't really recommend it:

Query by id

Query all documents

Full overwrite (idempotent)

Partial update

Query again to check the result.
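For reference, since the screenshots are missing, the document requests described above look roughly like this (the id 1001 and the update body are just illustrative values):
POST http://192.168.15.100:9200/shopping/_doc (create a document, ES generates the id; body is the JSON above)
POST http://192.168.15.100:9200/shopping/_doc/1001 (create a document with a custom id)
GET http://192.168.15.100:9200/shopping/_doc/1001 (query by id)
GET http://192.168.15.100:9200/shopping/_search (query all documents)
PUT http://192.168.15.100:9200/shopping/_doc/1001 (full overwrite, idempotent; body is the complete document)
POST http://192.168.15.100:9200/shopping/_update/1001 (partial update; body like {"doc":{"prices":"3999"}})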

Appending the search parameter to the request URL is not advisable, because Chinese characters may cause encoding problems:
http://192.168.15.100:9200/shopping/_search?q=category:小米

Putting the parameters in the request body is the recommended approach:
{
  "query":{
    "match":{
      "category":"小米"
    }
  }
}

To query everything, simply change match to match_all.

Paged query: from is the starting offset and size is the number of results per page.
{
  "query":{
    "match_all":{}
  },
  "from":0,
  "size":1
}

To return only certain fields, add a _source array listing the fields you want; a sort clause can be added in the same body to order the results:

{
  "query":{
    "match_all":{}
  },
  "from":0,
  "size":1,
  "_source":[
    "title"
  ],
  "sort":{
    "prices":{
      "order":"desc"
    }
  }
}
Combined conditions use a bool query; a single match condition goes inside the must array:
{
  "query":{
    "bool":{
      "must":[
        {
          "match":{
            "category":"小米"
          }
        }
      ]
    }
  }
}

To combine several conditions, just add more match clauses to the must array:
{
  "query":{
    "bool":{
      "must":[
        {
          "match":{
            "category":"小米"
          }
        },
        {
          "match":{
            "prices":"2999"
          }
        }
      ]
    }
  }
}

For an OR-style query (similar to SQL's or), use the should keyword in the request body:
{
  "query":{
    "bool":{
      "should":[
        {
          "match":{
            "category":"小米"
          }
        },
        {
          "match":{
            "category":"华为"
          }
        }
      ]
    }
  }
}

Range query: add a filter clause with a range condition:
{
  "query":{
    "bool":{
      "should":[
        {
          "match":{
            "category":"小米"
          }
        },
        {
          "match":{
            "category":"华为"
          }
        }
      ],
      "filter":{
        "range":{
          "prices":{
            "gt":5000
          }
        }
      }
    }
  }
}

Searching with just the single character 米 also returns results; that is the analyzer (tokenizer) at work.


Exact match query
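The request body for this step is not shown in the original; as a sketch, an exact phrase match on the category field uses match_phrase (the same query type that appears in the highlighting example below):
{
  "query":{
    "match_phrase":{
      "category":"小米"
    }
  }
}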

Highlighting
{
  "query":{
    "match_phrase":{
      "category":"小米"
    }
  },
  "highlight":{
    "fields":{
      "category":{}
    }
  }
}

Group (terms) aggregation
{
  "aggs":{
    "price_group":{    // this name can be anything you like
      "terms":{
        "field":"category"
      }
    }
  }
}
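One hedged note: if a terms aggregation on a text field is rejected with "Fielddata is disabled on text fields by default", aggregate on the keyword sub-field instead, for example:
"terms":{
  "field":"category.keyword"
}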

Fields that are not indexed cannot be queried (note this).
{
  "properties":{
    "name":{
      "type":"text",      // analyzed: the value is split into tokens
      "index":true        // indexed: can be searched
    },
    "sex":{
      "type":"keyword",   // not analyzed: matched as a whole value
      "index":true
    },
    "phone":{
      "type":"keyword",
      "index":false       // not indexed: cannot be searched
    }
  }
}
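To apply this mapping, send it as the body of a PUT _mapping request (a sketch; the index name user here is only a placeholder for your own index):
PUT http://192.168.15.100:9200/user/_mapping (request body: the properties JSON above)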
First, add the following dependencies to the pom.xml; my ES version is 7.6.1.
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.6.1</version>
</dependency>
<!-- elasticsearch client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.1</version>
</dependency>
<!-- elasticsearch depends on log4j 2.x -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.9</version>
</dependency>
<!-- junit unit tests -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
Below is sample code for creating the client:
package com.ruoyi.web.controller.es;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;

/**
 * Access ES as a client
 */
public class EsTest_Client {
    public static void main(String[] args) throws IOException {
        // Create the ES client
        RestHighLevelClient es_client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("172.16.94.7", 9200))
        );
        // Close the ES client
        es_client.close();
    }
}
// Create an index
public static void create_index() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the create-index request
    CreateIndexRequest request = new CreateIndexRequest("test_xige");
    // Send the request
    CreateIndexResponse response = es_client.indices().create(request, RequestOptions.DEFAULT);
    // Response status
    boolean acknowledged = response.isAcknowledged();
    // Print the response status
    System.out.println("acknowledged: " + acknowledged);
    // Close the ES client
    es_client.close();
}

The console prints true, which means the index was created successfully; you can then check it with Postman or Kibana.
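For example, a quick check from Postman against the same host used in the code:
GET http://172.16.94.7:9200/test_xige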

// View an index
public static void query_index() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the get-index request
    GetIndexRequest request = new GetIndexRequest("test_xige");
    // Send the request
    GetIndexResponse response = es_client.indices().get(request, RequestOptions.DEFAULT);
    // Pull out the index metadata
    Map<String, List<AliasMetaData>> aliases = response.getAliases();
    Map<String, MappingMetaData> mappings = response.getMappings();
    Map<String, Settings> settings = response.getSettings();
    // Print the index metadata
    System.out.println("index aliases: " + aliases);
    System.out.println("index mappings: " + mappings);
    System.out.println("index settings: " + settings);
    // Close the ES client
    es_client.close();
}

Delete an index
// Delete an index
public static void delete_index() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the delete-index request
    DeleteIndexRequest request = new DeleteIndexRequest("test_xige");
    // Send the request
    AcknowledgedResponse response = es_client.indices().delete(request, RequestOptions.DEFAULT);
    // Print the response status
    System.out.println("delete acknowledged: " + response.isAcknowledged());
    // Close the ES client
    es_client.close();
}

// Add a document to an index
public static void insert_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    IndexRequest request = new IndexRequest();
    request.index("xige").id("100001");
    XiGe xiGe = new XiGe();
    xiGe.setName("吴占喜");
    xiGe.setAge("30");
    xiGe.setSex("男");
    // Data added to ES must be in JSON format
    ObjectMapper mapper = new ObjectMapper();
    String json = mapper.writeValueAsString(xiGe);
    // Put the JSON into the request body
    request.source(json, XContentType.JSON);
    // Send the request
    IndexResponse response = es_client.index(request, RequestOptions.DEFAULT);
    // Print the result
    String id = response.getId();
    String index = response.getIndex();
    DocWriteResponse.Result result = response.getResult();
    System.out.println("ID: " + id);
    System.out.println("index: " + index);
    System.out.println("result: " + result);
    // Close the ES client
    es_client.close();
}
Check the console output:

Then check in Kibana; the document has been added successfully.

// Query a document by id
public static void query_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the get request
    GetRequest request = new GetRequest();
    // Index and document id to look up
    request.index("xige").id("100001");
    // Send the request
    GetResponse response = es_client.get(request, RequestOptions.DEFAULT);
    // Read the result
    String index = response.getIndex();
    String sourceAsString = response.getSourceAsString();
    // Print the result
    System.out.println("index: " + index);
    System.out.println("source: " + sourceAsString);
    // Close the ES client
    es_client.close();
}
Check the console output:

// Batch insert
public static void batch_insert_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the bulk request
    BulkRequest request = new BulkRequest();
    // Add several index requests to the bulk request
    request.add(new IndexRequest().index("xige").id("1001").source(XContentType.JSON, "name", "张三"));
    request.add(new IndexRequest().index("xige").id("1002").source(XContentType.JSON, "name", "李四"));
    request.add(new IndexRequest().index("xige").id("1003").source(XContentType.JSON, "name", "王五"));
    // Send the request
    BulkResponse response = es_client.bulk(request, RequestOptions.DEFAULT);
    // Print the result
    System.out.println(response.getItems());
    System.out.println(response.getTook());
    // Close the ES client
    es_client.close();
}
Console output:

It doesn't feel all that convenient; hopefully there is a better way to do batch inserts (see the sketch below).
Checking on the page, the documents were added successfully.
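On that wish: the high-level client also ships a BulkProcessor helper that collects requests and flushes them in batches automatically. A minimal sketch, assuming the same es_client as above (the thresholds, the id 1004 and the name value are arbitrary example values; the imports come from org.elasticsearch.action.bulk and org.elasticsearch.common.unit):
// Sketch: BulkProcessor batches IndexRequests and flushes them automatically
BulkProcessor bulkProcessor = BulkProcessor.builder(
        (bulkRequest, bulkListener) -> es_client.bulkAsync(bulkRequest, RequestOptions.DEFAULT, bulkListener),
        new BulkProcessor.Listener() {
            @Override public void beforeBulk(long executionId, BulkRequest request) { }
            @Override public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                System.out.println("bulk of " + request.numberOfActions() + " actions took " + response.getTook());
            }
            @Override public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                failure.printStackTrace();
            }
        })
        .setBulkActions(1000)                             // flush after 1000 queued actions
        .setFlushInterval(TimeValue.timeValueSeconds(5))  // or after 5 seconds
        .build();

bulkProcessor.add(new IndexRequest().index("xige").id("1004").source(XContentType.JSON, "name", "赵六"));
bulkProcessor.close(); // flush whatever is still queued and stop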

// Advanced query: match all
public static void query_all_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the search request
    SearchRequest request = new SearchRequest();
    // Index to search
    request.indices("xige");
    // match_all query
    SearchSourceBuilder query = new SearchSourceBuilder().query(QueryBuilders.matchAllQuery());
    // Attach the query to the request
    request.source(query);
    // Send the request
    SearchResponse response = es_client.search(request, RequestOptions.DEFAULT);
    // Read the hits
    SearchHits hits = response.getHits();
    System.out.println("total hits: " + hits.getTotalHits());
    // Close the ES client
    es_client.close();
}
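The example only prints the hit count; to see the documents themselves, you can iterate over the hits before closing the client, for example:
// Print each matching document's _source (org.elasticsearch.search.SearchHit)
for (SearchHit hit : hits) {
    System.out.println(hit.getSourceAsString());
}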

Conditional query
// Advanced query: term query
public static void query_term_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the search request
    SearchRequest request = new SearchRequest();
    // Index to search
    request.indices("xige");
    // term query on the name field
    SearchSourceBuilder query = new SearchSourceBuilder().query(QueryBuilders.termQuery("name", "王"));
    // Attach the query to the request
    request.source(query);
    // Send the request
    SearchResponse response = es_client.search(request, RequestOptions.DEFAULT);
    // Read the hits
    SearchHits hits = response.getHits();
    System.out.println("total hits: " + hits.getTotalHits());
    // Close the ES client
    es_client.close();
}
Here I noticed something interesting: searching for 王五 finds nothing, which probably has to do with how Chinese is tokenized, while searching for 王 or 五 alone does find the document.
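A likely explanation (not verified against the actual mapping): the default analyzer splits Chinese text into single-character tokens, so the name field is stored as the tokens 王 and 五, and termQuery looks for one exact token, which is why 王五 matches nothing while 王 or 五 does. If the default dynamic mapping created a keyword sub-field, a whole-value exact match would look roughly like this:
// assumes the dynamically created name.keyword sub-field exists
SearchSourceBuilder query = new SearchSourceBuilder().query(QueryBuilders.termQuery("name.keyword", "王五"));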
Paged query
public static void query_fenye_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the search request
    SearchRequest request = new SearchRequest();
    // Index to search
    request.indices("xige");
    // match_all query
    SearchSourceBuilder query = new SearchSourceBuilder().query(QueryBuilders.matchAllQuery());
    // Set the starting offset and the page size
    query.from(0);
    query.size(2);
    // Attach the query to the request
    request.source(query);
    // Send the request
    SearchResponse response = es_client.search(request, RequestOptions.DEFAULT);
    // Read the hits
    SearchHits hits = response.getHits();
    System.out.println("total hits: " + hits.getTotalHits());
    // Close the ES client
    es_client.close();
}
Sorting
public static void query_sort_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the search request
    SearchRequest request = new SearchRequest();
    // Index to search
    request.indices("xige");
    // match_all query
    SearchSourceBuilder query = new SearchSourceBuilder().query(QueryBuilders.matchAllQuery());
    // Sort by the name field in ascending order
    // (note: if name is a text field, sorting on it may be rejected; sort on name.keyword in that case)
    query.sort("name", SortOrder.ASC);
    // Attach the query to the request
    request.source(query);
    // Send the request
    SearchResponse response = es_client.search(request, RequestOptions.DEFAULT);
    // Read the hits
    SearchHits hits = response.getHits();
    System.out.println("total hits: " + hits.getTotalHits());
    // Close the ES client
    es_client.close();
}
// Advanced query: combined (bool) query
public static void query_combine_data() throws IOException {
    // Create the ES client
    RestHighLevelClient es_client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("172.16.94.7", 9200))
    );
    // Build the search request
    SearchRequest request = new SearchRequest();
    // Index to search
    request.indices("xige");
    // Source builder that will carry the query
    SearchSourceBuilder build = new SearchSourceBuilder();
    // Bool query combining one or more conditions
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
    // must condition: name must match 王
    boolQueryBuilder.must(QueryBuilders.matchQuery("name", "王"));
    // Attach the bool query to the source builder
    build.query(boolQueryBuilder);
    // Attach the source builder to the request
    request.source(build);
    // Send the request
    SearchResponse response = es_client.search(request, RequestOptions.DEFAULT);
    // Read the hits
    SearchHits hits = response.getHits();
    System.out.println("total hits: " + hits.getTotalHits());
    // Close the ES client
    es_client.close();
}
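For reference, the other bool clauses from the REST examples map onto the same builder; a hedged sketch of how they could be chained (the field values are just examples):
// extra clauses can be chained on the same BoolQueryBuilder
boolQueryBuilder.mustNot(QueryBuilders.matchQuery("sex", "男"));   // exclude matches
boolQueryBuilder.should(QueryBuilders.matchQuery("name", "李四")); // "or"-style clause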
1. Error: the Java path cannot be found. Note: do not install ES or the JDK under /root.
2. Error: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
I found that I had already configured this limit.

Then I ran the commands below, but the same error kept appearing:
ulimit -Hn: the hard limit on the maximum number of open file descriptors
ulimit -Sn: the soft limit on the maximum number of open file descriptors
After digging through the documentation I found that the change only takes effect after logging in again (or switching users). With that, the problem was solved.
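After logging in again as the es user, the new limits can be verified with:
ulimit -Hn
ulimit -Sn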
Starting Logstash reported the following error:
2022-10-17T15:21:48,805][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<ArgumentError: Path "/usr/local/soft/es/logstash/logstash-7.6.1/data/queue" must be a writable directory. It is not writable.>, :backtrace=>["/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/settings.rb:489:in `validate'", "/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/settings.rb:271:in `validate_value'", "/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/settings.rb:182:in `block in validate_all'", "org/jruby/RubyHash.java:1428:in `each'", "/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/settings.rb:181:in `validate_all'", "/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/runner.rb:284:in `execute'", "/usr/local/soft/es/logstash/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/local/soft/es/logstash/logstash-7.6.1/logstash-core/lib/logstash/runner.rb:242:in `run'", "/usr/local/soft/es/logstash/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/local/soft/es/logstash/logstash-7.6.1/lib/bootstrap/environment.rb:73:in `'" ]}
[2022-10-17T15:21:48,898][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
The cause is that /usr/local/soft/es/logstash/logstash-7.6.1/data/queue was still owned by the root user and group; changing the ownership fixes it.
Solution: go to the /usr/local/soft/es/ directory and change the Logstash directory's owner to the es user and group:
chown -R es:es /usr/local/soft/es/logstash/
Kibana showed the page below; nothing relevant was found in the log directory (/var/log/messages).

Solution: restart Kibana. I have not yet found the root cause or a proper fix for this error.