
python - Elasticsearch not returning data with a large page size

Size of data to fetch: approximately 20,000 records

Issue: I am searching Elasticsearch-indexed data with the command below in Python, but I am not getting any results back.

from pyelasticsearch import ElasticSearch
es_repo = ElasticSearch(settings.ES_INDEX_URL)
search_results = es_repo.search(
    query, index=advertiser_name, es_from=_from, size=_size)

If I give a size less than or equal to 10,000 it works fine, but not with 20,000. Please help me find an optimal solution to this.

PS: Digging deeper into ES, I found this error message:

Result window is too large, from + size must be less than or equal to: [10000] but was [19999]. See the scrolling API for a more efficient way to request large data sets.


1 Reply


For real-time use, the best solution is the search_after query. You only need a date field plus another field that uniquely identifies each doc; the _id or _uid field is enough. Try something like this; in my example I want to extract all the documents that belong to a single user, and the user field has a keyword datatype:

from elasticsearch import Elasticsearch

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"

user = "Francesco Totti"

# Query matching every document that belongs to the user
body2 = {
    "query": {
        "term": {"user": user}
    }
}

# Count the matching documents so we know how many we need to page through
res = es.count(index=es_index, doc_type=documento, body=body2)
size = res['count']


# First page: sort on the date field plus a unique tie-breaker (_uid)
# so the sort values of the last hit can be reused as a bookmark
body = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}

result = es.search(index=es_index, doc_type=documento, body=body)
# The bookmark is the pair of sort values of the last hit on this page
bookmark = [result['hits']['hits'][-1]['sort'][0],
            str(result['hits']['hits'][-1]['sort'][1])]

# Template for the following pages: same query and sort,
# plus "search_after" pointing at the bookmark
body1 = {
    "size": 10,
    "query": {
        "term": {"user": user}
    },
    "search_after": bookmark,
    "sort": [
        {"date": "asc"},
        {"_uid": "desc"}
    ]
}




# Keep paging until we have collected as many hits as the total count
while len(result['hits']['hits']) < size:
    res = es.search(index=es_index, doc_type=documento, body=body1)
    if not res['hits']['hits']:
        break  # nothing left to fetch; avoid looping forever
    for el in res['hits']['hits']:
        result['hits']['hits'].append(el)
    # Move the bookmark to the sort values of the last hit just fetched
    bookmark = [res['hits']['hits'][-1]['sort'][0],
                str(res['hits']['hits'][-1]['sort'][1])]
    body1["search_after"] = bookmark

You will then find all the documents appended to the result variable.
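
If you only need the documents themselves, a minimal sketch (assuming the loop above has completed; the documents name is just a placeholder) would be:

# Hypothetical post-processing: pull each hit's original document out of the accumulated hits
documents = [hit['_source'] for hit in result['hits']['hits']]
print(len(documents))  # should match the count computed earlier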

If you would rather use a scroll query (see the scroll API documentation), the helpers.scan utility handles the paging for you:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
es_index = "your_index_name"
documento = "your_doc_type"

user = "Francesco Totti"

body = {
    "query": {
        "term": {"user": user}
    }
}

# helpers.scan wraps the scroll API and yields every matching hit
res = helpers.scan(
    client=es,
    scroll='2m',   # keep each scroll context alive for 2 minutes between requests
    query=body,
    index=es_index)

for i in res:
    print(i)
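
Note that by default helpers.scan does not preserve the sort order of the hits; if you need them ordered as in the search_after example, a minimal sketch (under the same index and field assumptions as above) is to pass preserve_order=True:

# Hypothetical variant: honour the query's own sort instead of the efficient default order
ordered = helpers.scan(
    client=es,
    query={"query": {"term": {"user": user}}, "sort": [{"date": "asc"}]},
    index=es_index,
    scroll='2m',
    preserve_order=True)
for hit in ordered:
    print(hit['_source'])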
