• Installing Elasticsearch with Docker, with Examples


    1 Elasticsearch setup

    1.1 Pull the images

    docker pull elasticsearch:7.6.2
    docker pull kibana:7.6.2
    

    1.2 Create the mount directories and config

    mkdir -p /mydata/elasticsearch/config
    mkdir -p /mydata/elasticsearch/data
    echo "http.host: 0.0.0.0">>/mydata/elasticsearch/config/elasticsearch.yml
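A common pitfall in this step: YAML needs a space after the colon, and a line like `http.host:0.0.0.0` produces exactly the SettingsException shown in section 1.4. A minimal Python sketch to sanity-check the file for that mistake (the demo path is a temp file, not the real mount):

```python
import os
import re
import tempfile

def check_es_yml(path):
    """Return (lineno, line) for lines that look like 'key:value' with no space after ':'."""
    bad = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            stripped = line.strip()
            if not stripped or stripped.startswith("#"):
                continue
            # 'key:value' with no space after the colon breaks ES settings parsing
            if re.match(r"^[\w.]+:\S", stripped):
                bad.append((lineno, stripped))
    return bad

# demo on a temp file instead of /mydata/elasticsearch/config/elasticsearch.yml
with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
    f.write("http.host: 0.0.0.0\n")   # ok: space after the colon
    f.write("http.port:9200\n")       # missing space -> flagged
    path = f.name

print(check_es_yml(path))  # flags only the second line
os.remove(path)
```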
    

    1.3 Run the container

    docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
    -e "discovery.type=single-node" \
    -e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
    -v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    -v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
    -d elasticsearch:7.6.2
    

    1.4 Troubleshooting with the error log

    docker logs elasticsearch
    

    Inspect the log for the cause of the failure. The SettingsException below means elasticsearch.yml could not be parsed (typically a malformed line, e.g. no space after the colon); missing permissions on the mounted directories is the other common startup failure, fixed in 1.5.

    Exception in thread "main" SettingsException[Failed to load settings from [elasticsearch.yml]]; nested: ParsingException[Failed to parse object: expecting token of type [START_OBJECT] but found [VALUE_STRING]];
    	at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1097)
    	at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1070)
    	at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:83)
    	at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:100)
    	at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:91)
    	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
    	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
    	at org.elasticsearch.cli.Command.main(Command.java:90)
    	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
    	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
    Caused by: ParsingException[Failed to parse object: expecting token of type [START_OBJECT] but found [VALUE_STRING]]
    	at org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken(XContentParserUtils.java:78)
    	at org.elasticsearch.common.settings.Settings.fromXContent(Settings.java:617)
    	at org.elasticsearch.common.settings.Settings.access$400(Settings.java:82)
    	at org.elasticsearch.common.settings.Settings$Builder.loadFromStream(Settings.java:1093)
    	... 9 more
    OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
    2024-05-06 05:23:09,899 main ERROR No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2
    

    1.5 Open up permissions (run against the elasticsearch directory)

    chmod -R 777 /mydata/elasticsearch/
    

    1.6 Access from the host: 192.168.xxx.xxx:9200

    A successful response:
    {
      "name" : "49efa4a3b070",
      "cluster_name" : "elasticsearch",
      "cluster_uuid" : "RXxcWAwqQiyksqBNQOSwNA",
      "version" : {
        "number" : "7.6.2",
        "build_flavor" : "default",
        "build_type" : "docker",
        "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
        "build_date" : "2020-03-26T06:34:37.794943Z",
        "build_snapshot" : false,
        "lucene_version" : "8.4.0",
        "minimum_wire_compatibility_version" : "6.8.0",
        "minimum_index_compatibility_version" : "6.0.0-beta1"
      },
      "tagline" : "You Know, for Search"
    }
    

    2 Installing Kibana

    # either link to the elasticsearch container:
    docker run -d --name kibana -p 5601:5601 \
    --link elasticsearch:elasticsearch \
    kibana:7.6.2
    
    # or point Kibana at Elasticsearch explicitly (7.x images read ELASTICSEARCH_HOSTS):
    docker run -d --name kibana -p 5601:5601 \
    -e ELASTICSEARCH_HOSTS=http://192.168.xxx.xxx:9200 \
    kibana:7.6.2
    

    2.1 Access: http://192.168.233.128:5601/

    2.2 Start the containers automatically with the Docker daemon

    docker update elasticsearch --restart=always
    docker update kibana --restart=always
    

    2.3 Basic retrieval

    Click the wrench icon (Dev Tools).


    1. _cat
    GET /_cat/nodes: list all nodes
    GET /_cat/health: check cluster health
    GET /_cat/master: show the master node
    GET /_cat/indices: list all indices (the analogue of `show databases;`)
    

    2.4 Index a document (save)

    Save document 1 under the external type of the customer index:

    PUT customer/external/1
    {
    "name":"John Doe"
    }
    

    Both PUT and POST work.
    POST creates: without an id, one is auto-generated; with an id, it overwrites that document and bumps the version.
    PUT can create or update, but it must always carry an id; since the id is mandatory, PUT is generally used for updates. Calling PUT without an id is an error.
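To make the id and version rules concrete, here is a toy in-memory model (plain Python, not the ES API): POST without an id generates one, PUT requires an id, and writing to an existing id bumps `_version`.

```python
import uuid

class ToyIndex:
    """Toy model of ES document ids and versions; illustration only."""
    def __init__(self):
        self.docs = {}  # id -> (version, source)

    def post(self, source, id=None):
        if id is None:
            id = uuid.uuid4().hex  # POST without id: auto-generated id
        return self._write(id, source)

    def put(self, source, id=None):
        if id is None:
            raise ValueError("PUT requires an explicit id")
        return self._write(id, source)

    def _write(self, id, source):
        # writing to an existing id increments the version
        version = self.docs.get(id, (0, None))[0] + 1
        self.docs[id] = (version, source)
        return {"_id": id, "_version": version}

idx = ToyIndex()
r1 = idx.post({"name": "John Doe"})         # auto id, version 1
r2 = idx.put({"name": "John Doe"}, id="1")  # version 1
r3 = idx.put({"name": "John Doe2"}, id="1") # same id -> version 2
print(r2["_version"], r3["_version"])
```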

    2.5 Retrieve a document

    GET customer/external/1
    Response:
    {
        "_index": "customer",   // which index
        "_type": "external",    // which type
        "_id": "1",             // document id
        "_version": 1,          // version number
        "_seq_no": 0,           // concurrency-control field; incremented on every update; used for optimistic locking
        "_primary_term": 1,     // similar; changes whenever the primary shard is reassigned, e.g. after a restart
        "found": true,
        "_source": {            // the document content
            "name": "John Doe"
        }
    }
    

    Appending ?if_seq_no=0&if_primary_term=1 to an update acts as concurrency control under high contention (effectively an optimistic lock).
    Example: 192.168.xxx.xxx:9200/customer/external/1?if_seq_no=0&if_primary_term=1
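The `if_seq_no`/`if_primary_term` pair behaves like a compare-and-set: the write only goes through if the document still has the expected values, otherwise you get a 409 conflict. A toy sketch of that check (not ES internals):

```python
class ToyDoc:
    """Toy compare-and-set on (_seq_no, _primary_term), mimicking if_seq_no/if_primary_term."""
    def __init__(self, source):
        self.source = source
        self.seq_no = 0
        self.primary_term = 1

    def update(self, source, if_seq_no=None, if_primary_term=None):
        if if_seq_no is not None and (
            if_seq_no != self.seq_no or if_primary_term != self.primary_term
        ):
            # another writer got there first: reject with a version conflict
            return {"status": 409, "error": "version_conflict_engine_exception"}
        self.source = source
        self.seq_no += 1  # every successful write bumps _seq_no
        return {"status": 200, "_seq_no": self.seq_no}

doc = ToyDoc({"name": "John Doe"})
a = doc.update({"name": "A"}, if_seq_no=0, if_primary_term=1)  # first writer wins
b = doc.update({"name": "B"}, if_seq_no=0, if_primary_term=1)  # stale -> conflict
print(a["status"], b["status"])
```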

    2.6 Update a document

    POST customer/external/1/_update
    {
    "doc":{
    	"name":"John Doew"
    }
    }
    or
    POST customer/external/1
    {
    "name":"John Doe2"
    }
    or
    PUT customer/external/1
    {
    "name":"John Doe"
    }
    

    Differences: with _update, Elasticsearch compares the request against the stored source;
    if nothing changed it is a no-op and the document's version does not increase.
    PUT (and POST without _update) always re-save the document and increase the version.
    Choose by scenario:
    for write-heavy concurrency, skip _update;
    for read-heavy traffic with occasional updates, use _update (compare first, then update).
    Updating while adding a field:
    POST customer/external/1/_update
    {
    "doc":{"name":"Jane Doe","age":20}
    }
    PUT and POST without _update also work.
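The "no-op when unchanged" behavior of `_update` can be modeled the same way (toy sketch, not ES itself): a merge that leaves the source identical skips the write and the version bump.

```python
class ToyUpdatableDoc:
    """Toy model: _update skips the write (and the version bump) when nothing changed."""
    def __init__(self, source):
        self.source = dict(source)
        self.version = 1

    def update(self, doc):
        merged = {**self.source, **doc}
        if merged == self.source:
            # identical source -> no operation, version unchanged
            return {"result": "noop", "_version": self.version}
        self.source = merged
        self.version += 1
        return {"result": "updated", "_version": self.version}

d = ToyUpdatableDoc({"name": "John Doe"})
print(d.update({"name": "John Doe"}))             # unchanged -> noop, version stays 1
print(d.update({"name": "Jane Doe", "age": 20}))  # changed -> updated, version 2
```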

    2.7 Delete a document & an index

    DELETE customer/external/1
    DELETE customer
    

    2.8 The bulk API

    POST customer/external/_bulk
    {"index":{"_id":"1"}}
    {"name":"John Doe"}
    {"index":{"_id":"2"}}
    {"name":"Jane Doe"}
    

    Syntax:
    {action:{metadata}}\n
    {request body}\n
    {action:{metadata}}\n
    {request body}\n
    A more complex example:

    POST /_bulk
    { "delete": {"_index":"website","_type":"blog","_id":"123"}}
    { "create": {"_index":"website","_type":"blog","_id":"123"}}
    { "title":"My first blog post"}
    { "index":{"_index":"website","_type":"blog"}}
    { "title":"My second blog post"}
    { "update":{"_index":"website","_type":"blog","_id":"123"}}
    { "doc": {"title":"My updated blog post"}}
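The NDJSON framing above (an action line, then an optional body line, each newline-terminated) can be generated with a small helper. This is a hypothetical sketch, not the official client:

```python
import json

def build_bulk(ops):
    """ops: list of (action_dict, body_dict_or_None). Returns an NDJSON bulk body."""
    lines = []
    for action, body in ops:
        lines.append(json.dumps(action))
        if body is not None:  # delete actions carry no body line
            lines.append(json.dumps(body))
    return "\n".join(lines) + "\n"  # the bulk API requires a trailing newline

payload = build_bulk([
    ({"index": {"_id": "1"}}, {"name": "John Doe"}),
    ({"delete": {"_index": "website", "_id": "123"}}, None),
])
print(payload)
```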
    

    2.9 Official sample data: https://github.com/elastic/elasticsearch/blob/7.5/docs/src/test/resources/accounts.json

    POST /bank/account/_bulk
    {"index":{"_id":"1"}}
    {"account_number":1,"balance":39225,"firstname":"Amber","lastname":"Duke","age":32,"gender":"M","address":"880 Holmes Lane","employer":"Pyrami","email":"amberduke@pyrami.com","city":"Brogan","state":"IL"}
    ....
    

    2.10 Querying the sample data

    GET bank/_search?q=*&sort=account_number:asc
    
    # https://www.elastic.co/guide/en/elasticsearch/reference/7.6/getting-started-search.html
    
    GET /bank/_search
    {
      "query": {
        "match_all": {}
      },
      "sort": [
        {
          "account_number": "asc"
        },
        {
          "balance": "desc"
        }
      ]
    }
    
    GET /bank/_search
    {
      "query": {
        "match_all": {}
      },
      "sort": [
        {
          "balance": "desc"
        }
      ],
      "from": 0,
      "size": 5,
      "_source": ["balance","firstname"]
    }
    
    ## match: full-text search; the input is analyzed (tokenized) and results are sorted by relevance score
    GET bank/_search
    {
      "query": {
        "match": {
          "account_number": "20"
        }
      }
    }
    
    GET bank/_search
    {
      "query": {
        "match": {
          "balance": "16418"
        }
      }
    }
    
    GET bank/_search
    {
      "query": {
        "match": {
          "address": "Kings"
        }
      }
    }
    
    ## match_phrase: phrase matching; the terms must appear together and in order
    GET bank/_search
    {
      "query": {
        "match_phrase": {
          "address": "mill lane"
        }
      }
    }
    
    ## multi_match: match across several fields; here, state or address contains mill
    GET bank/_search
    {
      "query": {
        "multi_match": {
          "query": "mill",
          "fields": ["state","address"]
        }
      }
    }
    
    ## bool compound query; should = optional clauses that boost the score when they match
    GET /bank/_search
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "age": "40" } },
            { "match": { "address": "Ide" } }
          ],
          "must_not": [
            { "match": { "state": "ID" } }
          ],
          "should": [
            {"match": {
              "state": "AI"
            }}
          ]
        }
      }
    }
    
    ## filter: not every clause needs to produce a relevance score, especially clauses used purely for filtering; Elasticsearch detects this and optimizes execution by skipping scoring
    GET /bank/_search
    {
      "query": {
        "bool": {
          "must": { "match_all": {} },
          "filter": {
            "range": {
              "balance": {
                "gte": 20000,
                "lte": 30000
              }
            }
          }
        }
      }
    }
    
    ## term: like match but for exact values; use match for full-text (text) fields, term for non-text fields; prefer term for exact-value lookups
    GET bank/_search
    {
      "query": {
        "term": {
          "age": "28"
        }
      }
    }
    
    ## the .keyword sub-field is searched as an exact value
    GET bank/_search
    {
      "query": {
        "match": {
          "address.keyword": "789 Madison"
        }
      }
    }
    ## match_phrase matches as long as the phrase is contained; .keyword is exact and the whole value must match
    ## use term for exact-value fields and match for text fields
    GET bank/_search
    {
      "query": {
        "match_phrase": {
          "address": "789 Madison"
        }
      }
    }
    
    ## aggregations example: for everyone whose address contains mill, get the age distribution and the average age, without returning the matching documents themselves
    ## size: 0 means return only the aggregation results
    GET bank/_search
    {
      "query": {
        "match": {
          "address": "mill"
        }
      },
      "aggs": {
        "ageAgg": {
          "terms": {
            "field": "age",
            "size": 10
          }
        },
        "ageAvg":{
          "avg": {
            "field": "age"
          }
        },
        "balanceAvg":{
          "avg": {
            "field": "balance"
          }
        }
      },
      "size": 0
    }
    ## bucket by age, and compute the average balance within each age bucket
    GET bank/_search
    {
      "query": {
        "match_all": {
          
        }
      },
        "aggs":{
          "ageAgg":{
            "terms": {
              "field": "age",
              "size": 100
            },
            "aggs": {
              "ageAvg": {
                "avg": {
                  "field": "balance"
                }
              }
            }
          }
        }
    }
    
    ## full age distribution, and within each age bucket the average balance for M and for F, plus the bucket's overall average balance
    GET bank/_search
    {
      "query": {
        "match_all": {}
      },
      "aggs": {
        "ageAgg": {
          "terms": {
            "field": "age",
            "size": 100
          },
          "aggs": {
            "genderAgg": {
              "terms": {
                "field": "gender.keyword",
                "size": 10
              },
              "aggs":{
                "balanceAvg": {
                  "avg": {
                    "field": "balance"
                  }
                }
              }
            },
            "ageBalanceAvg": {
              "avg": {
                "field": "balance"
              }
            }
          }
        }
      }
    }
    
    GET /bank/_mapping
    
    # create an index with an explicit mapping
    PUT /my_index
    {
      "mappings": {
        "properties": {
          "age":{"type": "integer"},
          "email":{"type": "keyword"},
          "name":{"type": "text"}
        }
      }
    }
    #Add a new field mapping; this cannot be used to change an existing field
    PUT /my_index/_mapping
    {
      "properties":{
        "employee-id":{
          "type": "keyword",
          "index": false
        }
      }
    }
    #Changing an existing mapping requires data migration: create a new index with the right mapping and move the data over
    
    GET /bank/_mapping
    
    PUT /newbank
    {
      "mappings": {
        "properties": {
          "account_number": {
            "type": "long"
          },
          "address": {
            "type": "text"
          },
          "age": {
            "type": "integer"
          },
          "balance": {
            "type": "long"
          },
          "city": {
            "type": "keyword"
          },
          "email": {
            "type": "keyword"
          },
          "employer": {
            "type": "keyword"
          },
          "firstname": {
            "type": "text"
          },
          "gender": {
            "type": "keyword"
          },
          "lastname": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "state": {
            "type": "keyword"
          }
        }
      }
    }
    GET newbank/_mapping
    #Data migration: first create the target index with the correct mapping, then reindex as follows
    #type: only older versions need a type under source; newer versions drop types entirely
    #source: the old (source) index; dest: the new (target) index
    POST _reindex
    {
      "source": {
        "index": "bank",
        "type": "account"
      },
      "dest": {
        "index": "newbank"
      }
    }
    GET /newbank/_search
    #The built-in analyzers are limited for Chinese; install the IK analyzer plugin
    POST _analyze
    {
      "analyzer": "standard",
      "text": "the 2 QUICK Brown-Foxes jumped over the lazy dog`s bone"
    }
    
    #IK smart analyzer
    POST _analyze
    {
      "analyzer": "ik_smart",
      "text": "我是中国人"
    }
    # ik_max_word: exhaustive segmentation, producing every possible word combination
    POST _analyze
    {
      "analyzer": "ik_max_word",
      "text": "接地电流"
    }
    
    GET users/_search
    
    #product index for the mall
    PUT product
    {
      "mappings": {
        "properties": {
          "skuId": {
            "type": "long"
          },
          "spuId": {
            "type": "keyword"
          },
          "skuTitle": {
            "type": "text",
            "analyzer": "ik_smart"
          },
          "skuPrice": {
            "type": "keyword"
          },
          "skuImg": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "saleCount": {
            "type": "long"
          },
          "hasStock": {
            "type": "boolean"
          },
          "hotScore": {
            "type": "long"
          },
          "brandId": {
            "type": "long"
          },
          "catalogId": {
            "type": "long"
          },
          "brandName": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "brandImg": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "catalogName": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "attrs": {
            "type": "nested",
            "properties": {
              "attrId": {
                "type": "long"
              },
              "attrName": {
                "type": "keyword",
                "index": false,
                "doc_values": false
              },
              "attrValue": {
                "type": "keyword"
              }
            }
          }
        }
      }
    }
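The nested aggregation from 2.10 (age buckets, gender sub-buckets, per-bucket averages) can be mirrored on a few in-memory rows to check one's understanding of the output shape. The sample rows below are made up, not the real accounts.json:

```python
from collections import defaultdict

accounts = [
    {"age": 32, "gender": "M", "balance": 39225},
    {"age": 32, "gender": "F", "balance": 10000},
    {"age": 28, "gender": "M", "balance": 5000},
]

# mirror of: terms agg on age, with sub-aggs = terms on gender.keyword + avg balance
buckets = defaultdict(list)
for a in accounts:
    buckets[a["age"]].append(a)

result = {}
for age, rows in buckets.items():
    by_gender = defaultdict(list)
    for r in rows:
        by_gender[r["gender"]].append(r["balance"])
    result[age] = {
        "doc_count": len(rows),
        # overall average balance for this age bucket (ageBalanceAvg)
        "ageBalanceAvg": sum(r["balance"] for r in rows) / len(rows),
        # average balance per gender inside the bucket (genderAgg > balanceAvg)
        "genderAgg": {g: sum(v) / len(v) for g, v in by_gender.items()},
    }

print(result[32]["doc_count"], result[32]["ageBalanceAvg"])
```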
    

    2.11 Java RestHighLevelClient example

    pom.xml

            <dependency>
                <groupId>org.elasticsearch.client</groupId>
                <artifactId>elasticsearch-rest-high-level-client</artifactId>
                <version>7.6.2</version>
            </dependency>
    

    ElasticSearchConfig

    package com.search.config;
    
    import org.apache.http.HttpHost;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    
    
    @Configuration
    public class ElasticSearchConfig {
    
        // @Bean
        // public RestHighLevelClient esRestClient(){
        //     RestHighLevelClient client = new RestHighLevelClient(
        //             RestClient.builder(new HttpHost("192.168.137.14", 9200, "http")));
        //     return  client;
        // }
    
        public static final RequestOptions COMMON_OPTIONS;
        static {
            RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
            // builder.addHeader("Authorization", "Bearer " + TOKEN);
            // builder.setHttpAsyncResponseConsumerFactory(
            //         new HttpAsyncResponseConsumerFactory
            //                 .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024 * 1024));
            COMMON_OPTIONS = builder.build();
        }
    
        @Bean
        public RestHighLevelClient esRestClient(){
            RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("192.168.77.130", 9200, "http")));
            return  client;
        }
    
    }
    
    

    SearchApplicationTests

    package com.search;
    
    import com.alibaba.fastjson.JSON;
    import com.search.config.ElasticSearchConfig;
    import lombok.Data;
    import lombok.Getter;
    import lombok.Setter;
    import lombok.ToString;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.action.index.IndexResponse;
    import org.elasticsearch.action.search.SearchRequest;
    import org.elasticsearch.action.search.SearchResponse;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.common.xcontent.XContentType;
    import org.elasticsearch.index.query.QueryBuilder;
    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.search.SearchHit;
    import org.elasticsearch.search.SearchHits;
    import org.elasticsearch.search.aggregations.AggregationBuilders;
    import org.elasticsearch.search.aggregations.Aggregations;
    import org.elasticsearch.search.aggregations.bucket.terms.Terms;
    import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
    import org.elasticsearch.search.aggregations.metrics.Avg;
    import org.elasticsearch.search.aggregations.metrics.AvgAggregationBuilder;
    import org.elasticsearch.search.builder.SearchSourceBuilder;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.junit4.SpringRunner;
    
    import javax.annotation.Resource;
    import java.io.IOException;
    
    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class SearchApplicationTests {
    
    
        @Resource
        private RestHighLevelClient client;
    
        @ToString
        @Data
        static class Account {
            private int account_number;
            private int balance;
            private String firstname;
            private String lastname;
            private int age;
            private String gender;
            private String address;
            private String employer;
            private String email;
            private String city;
            private String state;
        }
    
    
    /**
     * Complex search: in bank, for everyone whose address contains mill,
     * get the age distribution plus the average age and the average balance.
     * @throws IOException
     */
        @Test
        public void searchData() throws IOException {
    //1. Create the search request
            SearchRequest searchRequest = new SearchRequest();
    
    //1.1) target index
            searchRequest.indices("bank");
    //1.2) build the search criteria
            SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
            sourceBuilder.query(QueryBuilders.matchQuery("address", "Mill"));
    
    
    //1.2.1) terms aggregation on age
            TermsAggregationBuilder ageAgg = AggregationBuilders.terms("ageAgg").field("age").size(10);
            sourceBuilder.aggregation(ageAgg);
    
    //1.2.2) average age
            AvgAggregationBuilder ageAvg = AggregationBuilders.avg("ageAvg").field("age");
            sourceBuilder.aggregation(ageAvg);
    //1.2.3) average balance
            AvgAggregationBuilder balanceAvg = AggregationBuilders.avg("balanceAvg").field("balance");
            sourceBuilder.aggregation(balanceAvg);
    
    System.out.println("Query: " + sourceBuilder);
            searchRequest.source(sourceBuilder);
    //2. Execute the search
            SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
    System.out.println("Response: " + searchResponse);
    
    //3. Map the hits to beans
            SearchHits hits = searchResponse.getHits();
            SearchHit[] searchHits = hits.getHits();
            for (SearchHit searchHit : searchHits) {
                String sourceAsString = searchHit.getSourceAsString();
                Account account = JSON.parseObject(sourceAsString, Account.class);
                System.out.println(account);
    
            }
    
    //4. Read the aggregation results
            Aggregations aggregations = searchResponse.getAggregations();
    
            Terms ageAgg1 = aggregations.get("ageAgg");
    
            for (Terms.Bucket bucket : ageAgg1.getBuckets()) {
                String keyAsString = bucket.getKeyAsString();
    System.out.println("age: " + keyAsString + " ==> " + bucket.getDocCount());
            }
            Avg ageAvg1 = aggregations.get("ageAvg");
    System.out.println("average age: " + ageAvg1.getValue());
    
            Avg balanceAvg1 = aggregations.get("balanceAvg");
    System.out.println("average balance: " + balanceAvg1.getValue());
        }
    
        @Test
        public void searchState() throws IOException {
    //1. Build the search criteria
            SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
            //        sourceBuilder.query(QueryBuilders.termQuery("city", "Nicholson"));
            //        sourceBuilder.from(0);
            //        sourceBuilder.size(5);
            //        sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
            QueryBuilder matchQueryBuilder = QueryBuilders.matchQuery("state", "AK");
            //                .fuzziness(Fuzziness.AUTO)
            //                .prefixLength(3)
            //                .maxExpansions(10);
            sourceBuilder.query(matchQueryBuilder);
            SearchRequest searchRequest = new SearchRequest();
            searchRequest.indices("bank");
            searchRequest.source(sourceBuilder);
    //2. Execute the search
            SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
            System.out.println(searchResponse);
    
        }
    
    /**
     * Index test data into ES.
     * Also works as an update.
     */
        @Test
        public void indexData() throws IOException {
    
            IndexRequest indexRequest = new IndexRequest("users");
    indexRequest.id("1");   // document id
    
            // indexRequest.source("userName","zhangsan","age",18,"gender","男");
    
            User user = new User();
            user.setUserName("zhangsan");
            user.setAge(18);
            user.setGender("男");
    
            String jsonString = JSON.toJSONString(user);
    indexRequest.source(jsonString, XContentType.JSON);  // the content to save
    
    //execute the operation
    IndexResponse index = client.index(indexRequest, ElasticSearchConfig.COMMON_OPTIONS);
    
    //extract the useful response data
            System.out.println(index);
    
        }
    
        @Getter
        @Setter
        class User {
            private String userName;
            private String gender;
            private Integer age;
        }
    
        @Test
        public void contextLoads() {
    
            System.out.println(client);
    
        }
    }
    
    

    2.12 For more Java examples, see

    https://blog.csdn.net/zk13120778155/article/details/131432628

  • Original article: https://blog.csdn.net/zk13120778155/article/details/139285793