


IncompleteRead during paginated request?

Techie
Posts: 5

I'm sporadically getting errors like the one below, and I'm at a loss as to how to get around it.

My code generates paginated requests and processes the responses. My initial thought was to retry a certain number of times when this exception occurs, along the lines of https://stackoverflow.com/questions/44378849/bypassing-the-incompleteread-exception. But since the requests are paginated (implementing the approach from https://community.infoblox.com/t5/API-Integration/Specify-max-records-for-a-query/m-p/12781#M1608), my guess is that the retry attempts would fail because the _page_id value wouldn't be valid anymore. A sketch of what I had in mind follows the traceback. Has anyone else seen this, or have any thoughts?

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 331, in _error_catcher
    yield
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 640, in read_chunked
    chunk = self._handle_chunk(amt)
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 596, in _handle_chunk
    self._fp._safe_read(2)  # Toss the CRLF at the end of the chunk.
  File "/usr/local/lib/python3.4/http/client.py", line 664, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(1 bytes read, 1 more expected)
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/requests/models.py", line 749, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 461, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 665, in read_chunked
    self._original_response.close()
  File "/usr/local/lib/python3.4/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.4/site-packages/urllib3/response.py", line 349, in _error_catcher
    raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(1 bytes read, 1 more expected)', IncompleteRead(1 bytes read, 1 more expected))
 
During handling of the above exception, another exception occurred:
 
Traceback (most recent call last):
  File "./inventory.py", line 469, in <module>
    total_records = GetIPAMRecords(data_for, cfg)
  File "./inventory.py", line 250, in GetIPAMRecords
    total_records += GetDataInDomain(args, data_for, srch_rec)
  File "./inventory.py", line 219, in GetDataInDomain
    newresponse = GetResponse(params, url, acct, passwd)
  File "./inventory.py", line 122, in GetResponse
    auth=(str(acct),str(passwd))
  File "/usr/local/lib/python3.4/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.4/site-packages/requests/sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.4/site-packages/requests/sessions.py", line 662, in send
    r.content
  File "/usr/local/lib/python3.4/site-packages/requests/models.py", line 827, in content
    self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
  File "/usr/local/lib/python3.4/site-packages/requests/models.py", line 752, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(1 bytes read, 1 more expected)', IncompleteRead(1 bytes read, 1 more expected))
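
For what it's worth, here's a minimal sketch of the retry idea, assuming that once the stream breaks the server-side _page_id is no longer valid, so the whole paged query has to be restarted from the first page. The helper name and the _max_results value are just placeholders, not my actual code:

import requests

def get_all_pages(url, params, acct, passwd, max_attempts=3):
    # Pull every page of a WAPI paged query; on a broken chunked
    # read, assume the _page_id cursor is dead and restart the
    # whole query from the first page.
    for attempt in range(max_attempts):
        try:
            results = []
            page_params = dict(params, _paging=1,
                               _return_as_object=1, _max_results=1000)
            while True:
                resp = requests.get(url, params=page_params,
                                    auth=(str(acct), str(passwd)))
                resp.raise_for_status()
                data = resp.json()
                results.extend(data['result'])
                if 'next_page_id' not in data:
                    return results
                # Subsequent pages are requested by cursor; the
                # original filters are assumed to be carried by
                # the server-side cursor.
                page_params = {'_paging': 1, '_return_as_object': 1,
                               '_page_id': data['next_page_id']}
        except requests.exceptions.ChunkedEncodingError:
            continue  # stream broke mid-page: retry from scratch
    raise RuntimeError('query failed after %d attempts' % max_attempts)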


Re: IncompleteRead during paginated request?

Moderator
Posts: 287

Can you post that portion of your code?

How many objects, pages, and objects per page are you pulling? And are you pulling all of the pages relatively quickly, or one page at a time over a longer timeframe?

I'm wondering if some part of the data expires or is modified before it can all be pulled, thus invalidating the dataset you are paging through.
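
If you want to test that theory, something like this would show whether the failures correlate with long-running pulls. The helper here is just a sketch, not part of your script:

import logging
import time

import requests

logging.basicConfig(level=logging.INFO)

def timed_get(url, params, acct, passwd):
    # Log how long each page takes and how large it is; if the
    # failures cluster on the later pages of long pulls, the
    # server-side paging cursor is probably timing out or the
    # underlying data is changing mid-pull.
    start = time.monotonic()
    resp = requests.get(url, params=params, auth=(acct, passwd))
    logging.info('page fetched in %.1fs (status %s, %d bytes)',
                 time.monotonic() - start, resp.status_code,
                 len(resp.content))
    return resp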
