Avoiding redirection


Answers

The original URL has nothing to scrape: it returned a 302, which carries no body, and its Location header tells the client where to redirect. You need to work out how to access the URL without being redirected, perhaps by authenticating first.
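To see what that means concretely, here is a Python 3 standard-library sketch (an illustration, not the asker's Scrapy setup): by suppressing urllib's automatic redirect handling, the 302 itself, and its Location header, become visible to the caller.

```python
import urllib.request
import urllib.error

class NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects: returning None means no handler
    consumed the 3xx, so urllib raises an HTTPError that still carries
    the original status code and headers."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def fetch_without_redirect(url):
    """Return (status, location): location is the redirect target for a
    3xx response, or None for a normal response."""
    opener = urllib.request.build_opener(NoRedirectHandler)
    try:
        resp = opener.open(url)
        return resp.status, None
    except urllib.error.HTTPError as err:
        if 300 <= err.code < 400:
            return err.code, err.headers.get('Location')
        raise
```

Against the site in the question this would report status 302 and the `default.asp` Location target instead of silently landing on the main page.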

Question

I'm trying to parse a site (written in ASP), and the crawler gets redirected to the main site. What I'd like to do is parse the given URL, not the one it redirects to. Is there a way to do this? I tried adding "REDIRECT=False" to the settings.py file, without success.

Here's some output from the crawler:

2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=500&id=500>
2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=1513&id=1513>
2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=476&id=476>
2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=472&id=472>
2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=457&id=457>
2011-09-24 20:01:11-0300 [coto] DEBUG: Redirecting (302) to <GET http://www.cotodigital.com.ar/default.asp> from <GET http://www.cotodigital.com.ar/l.asp?cat=1097&id=1097>



I had an issue with an infinite redirect loop when using HTTPCACHE_ENABLED = True. I managed to avoid the problem by setting HTTPCACHE_IGNORE_HTTP_CODES = [301,302].
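In settings.py that combination looks like the following (a sketch of just the two settings named above; the rest of the file is whatever your project already uses):

```python
# Cache responses, but never cache 301/302 redirect responses themselves,
# so a stale cached redirect cannot send the spider into a loop.
HTTPCACHE_ENABLED = True
HTTPCACHE_IGNORE_HTTP_CODES = [301, 302]
```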




how to handle 302 redirect in scrapy

Forget about middlewares in this scenario; this will do the trick:

meta = {'dont_redirect': True, 'handle_httpstatus_list': [302]}

That said, you will need to include the meta parameter when you yield your request:

yield Request(item['link'], meta={
    'dont_redirect': True,
    'handle_httpstatus_list': [302]
}, callback=self.your_callback)
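With the redirect left unhandled, the callback receives the raw 302 response and can read the target itself. A minimal sketch (`redirect_target` is a hypothetical helper name, assuming a Scrapy-style response object with a `.status` attribute and a headers mapping whose values may be bytes):

```python
def redirect_target(response):
    """Return the Location of an un-followed 302 response, or None.

    Hypothetical helper: assumes a Scrapy-style response with `.status`
    and a `.headers` mapping whose values may be bytes.
    """
    if response.status != 302:
        return None
    location = response.headers.get('Location', b'')
    return location.decode('latin-1') if isinstance(location, bytes) else location
```

Inside `your_callback` you could log or re-request that target explicitly instead of letting the middleware follow it.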



from scrapy.http.cookies import CookieJar
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy import log
import urlparse
...

class Spider(BaseSpider):
    def parse(self, response):
        '''Parse category page, extract subcategories links.'''

        hxs = HtmlXPathSelector(response)
        subcategories = hxs.select(".../@href").extract()
        for subcategorySearchLink in subcategories:
            subcategorySearchLink = urlparse.urljoin(response.url, subcategorySearchLink)
            self.log('Found subcategory link: ' + subcategorySearchLink, log.DEBUG)
            # Use dont_merge_cookies to force the site to generate a new
            # PHPSESSID cookie. This is needed because the site uses
            # sessions to remember the search parameters.
            yield Request(subcategorySearchLink, callback=self.extractItemLinks,
                          meta={'dont_merge_cookies': True})

    def extractItemLinks(self, response):
        '''Extract item links from subcategory page and go to next page.'''
        hxs = HtmlXPathSelector(response)
        for itemLink in hxs.select(".../a/@href").extract():
            itemLink = urlparse.urljoin(response.url, itemLink)
            self.log('Requesting item page %s' % itemLink, log.DEBUG)
            yield Request(...)

        nextPageLink = self.getFirst(".../@href", hxs)
        if nextPageLink:
            nextPageLink = urlparse.urljoin(response.url, nextPageLink)
            self.log('\nGoing to next search page: ' + nextPageLink + '\n', log.DEBUG)
            cookieJar = response.meta.setdefault('cookie_jar', CookieJar())
            cookieJar.extract_cookies(response, response.request)
            request = Request(nextPageLink, callback=self.extractItemLinks,
                              meta={'dont_merge_cookies': True, 'cookie_jar': cookieJar})
            cookieJar.add_cookie_header(request)  # apply Set-Cookie ourselves
            yield request
        else:
            self.log('Whole subcategory scraped.', log.DEBUG)

