🐐 GOAT Shell

Current path: tmp/




📄 Viewing: phpz568bg

[The viewed file is compiled Python 2.7 bytecode (SocketServer.pyc); the recoverable module docstring follows.]

Generic socket server classes.

This module tries to capture the various aspects of defining a server:

For socket-based servers:

- address family:
        - AF_INET{,6}: IP (Internet Protocol) sockets (default)
        - AF_UNIX: Unix domain sockets
        - others, e.g. AF_DECNET are conceivable (see <socket.h>)
- socket type:
        - SOCK_STREAM (reliable stream, e.g. TCP)
        - SOCK_DGRAM (datagrams, e.g. UDP)

For request-based servers (including socket-based):

- client address verification before further looking at the request
        (This is actually a hook for any processing that needs to look
         at the request before anything else, e.g. logging)
- how to handle multiple requests:
        - synchronous (one request is handled at a time)
        - forking (each request is handled by a new process)
        - threading (each request is handled by a new thread)

The classes in this module favor the server type that is simplest to
write: a synchronous TCP/IP server.  This is bad class design, but
saves some typing.  (There's also the issue that a deep class hierarchy
slows down method lookups.)

There are five classes in an inheritance diagram, four of which represent
synchronous servers of four types:

        +------------+
        | BaseServer |
        +------------+
              |
              v
        +-----------+        +------------------+
        | TCPServer |------->| UnixStreamServer |
        +-----------+        +------------------+
              |
              v
        +-----------+        +--------------------+
        | UDPServer |------->| UnixDatagramServer |
        +-----------+        +--------------------+

Note that UnixDatagramServer derives from UDPServer, not from
UnixStreamServer -- the only difference between an IP and a Unix
stream server is the address family, which is simply repeated in both
unix server classes.

Forking and threading versions of each type of server can be created
using the ForkingMixIn and ThreadingMixIn mix-in classes.  For
instance, a threading UDP server class is created as follows:

        class ThreadingUDPServer(ThreadingMixIn, UDPServer): pass

The Mix-in class must come first, since it overrides a method defined
in UDPServer! Setting the various member variables also changes
the behavior of the underlying server mechanism.

To implement a service, you must derive a class from
BaseRequestHandler and redefine its handle() method.  You can then run
various versions of the service by combining one of the server classes
with your request handler class.

The request handler class must be different for datagram or stream
services.  This can be hidden by using the request handler
subclasses StreamRequestHandler or DatagramRequestHandler.
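This combination can be sketched with the Python 3 socketserver module (the renamed descendant of this one); the handler name and the use of port 0 to pick a free port are illustrative:

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.StreamRequestHandler):
    """Handle one connection: echo a single line back to the client."""
    def handle(self):
        line = self.rfile.readline()
        self.wfile.write(line)

# Combine a server class with the request handler class; port 0 picks a free port.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
with socket.create_connection((host, port)) as conn:
    conn.sendall(b"hello\n")
    reply = conn.makefile("rb").readline()

server.shutdown()
server.server_close()
print(reply)  # b'hello\n'
```

The same handler class works with TCPServer, ForkingTCPServer, or ThreadingTCPServer unchanged; only the server class chosen decides the concurrency model.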

Of course, you still have to use your head!

For instance, it makes no sense to use a forking server if the service
contains state in memory that can be modified by requests (since the
modifications in the child process would never reach the initial state
kept in the parent process and passed to each child).  In this case,
you can use a threading server, but you will probably have to use
locks to keep two nearly simultaneous requests from applying
conflicting changes to the server state.
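The locking discipline this requires can be shown with plain threading, independent of any particular server class (a minimal sketch; the Counter name is illustrative):

```python
import threading

class Counter:
    """Per-server state shared by all handler threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self.count = 0

    def increment(self):
        # Without the lock, two nearly simultaneous increments could both
        # read the same old value, and one update would be lost.
        with self._lock:
            self.count += 1

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.count)  # 8000
```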

On the other hand, if you are building e.g. an HTTP server, where all
data is stored externally (e.g. in the file system), a synchronous
class will essentially render the service "deaf" while one request is
being handled -- which may be for a very long time if a client is slow
to read all the data it has requested.  Here a threading or forking
server is appropriate.

In some cases, it may be appropriate to process part of a request
synchronously, but to finish processing in a forked child depending on
the request data.  This can be implemented by using a synchronous
server and doing an explicit fork in the request handler class
handle() method.

Another approach to handling multiple simultaneous requests in an
environment that supports neither threads nor fork (or where these are
too expensive or inappropriate for the service) is to maintain an
explicit table of partially finished requests and to use select() to
decide which request to work on next (or whether to handle a new
incoming request).  This is particularly important for stream services
where each client can potentially be connected for a long time (if
threads or subprocesses cannot be used).
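The select()-based approach can be illustrated on a connected socket pair (a sketch; a real server would also keep its listening socket in the read set and a table of per-client state):

```python
import select
import socket

a, b = socket.socketpair()   # stand-ins for a client and the server-side socket
a.sendall(b"ping")

# Block until some socket in the read set is readable, then serve only
# that one -- the other sockets are not blocked while we wait.
readable, _, _ = select.select([b], [], [], 5.0)
for sock in readable:
    data = sock.recv(1024)

a.close()
b.close()
print(data)  # b'ping'
```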

Future work:
- Standard classes for Sun RPC (which uses either UDP or TCP)
- Standard mix-in classes to implement various authentication
  and encryption schemes
- Standard framework for select-based multiplexing

XXX Open problems:
- What to do with out-of-band data?

BaseServer:
- split generic "request" functionality out into BaseServer class.
  Copyright (C) 2000  Luke Kenneth Casson Leighton <lkcl@samba.org>

  example: read entries from a SQL database (requires overriding
  get_request() to return a table entry from the database).
  entry is processed by a RequestHandlerClass.

[The rest of SocketServer.pyc is marshalled bytecode (module version 0.4) for _eintr_retry, BaseServer, TCPServer, UDPServer, ForkingMixIn, ThreadingMixIn, the Forking/Threading server combinations, the Unix-domain variants, BaseRequestHandler, StreamRequestHandler, and DatagramRequestHandler, and is not recoverable as source. The BaseServer class docstring, which survives intact, reads:]

Base class for server classes.

Methods for the caller:

- __init__(server_address, RequestHandlerClass)
- serve_forever(poll_interval=0.5)
- shutdown()
- handle_request()  # if you do not use serve_forever()
- fileno() -> int   # for select()

Methods that may be overridden:

- server_bind()
- server_activate()
- get_request() -> request, client_address
- handle_timeout()
- verify_request(request, client_address)
- server_close()
- process_request(request, client_address)
- shutdown_request(request)
- close_request(request)
- handle_error()

Methods for derived classes:

- finish_request(request, client_address)

Class variables that may be overridden by derived classes or
instances:

- timeout
- address_family
- socket_type
- allow_reuse_address

Instance variables:

- RequestHandlerClass
- socket
"""Macintosh-specific module for conversion between pathnames and URLs.

Do not import directly; use urllib instead."""

import urllib
import os

__all__ = ["url2pathname","pathname2url"]

def url2pathname(pathname):
    """OS-specific conversion from a relative URL of the 'file' scheme
    to a file system path; not recommended for general use."""
    #
    # XXXX The .. handling should be fixed...
    #
    tp = urllib.splittype(pathname)[0]
    if tp and tp != 'file':
        raise RuntimeError, 'Cannot convert non-local URL to pathname'
    # Turn starting /// into /, an empty hostname means current host
    if pathname[:3] == '///':
        pathname = pathname[2:]
    elif pathname[:2] == '//':
        raise RuntimeError, 'Cannot convert non-local URL to pathname'
    components = pathname.split('/')
    # Remove . and embedded ..
    i = 0
    while i < len(components):
        if components[i] == '.':
            del components[i]
        elif components[i] == '..' and i > 0 and \
                                  components[i-1] not in ('', '..'):
            del components[i-1:i+1]
            i = i-1
        elif components[i] == '' and i > 0 and components[i-1] != '':
            del components[i]
        else:
            i = i+1
    if not components[0]:
        # Absolute unix path, don't start with colon
        rv = ':'.join(components[1:])
    else:
        # relative unix path, start with colon. First replace
        # leading .. by empty strings (giving ::file)
        i = 0
        while i < len(components) and components[i] == '..':
            components[i] = ''
            i = i + 1
        rv = ':' + ':'.join(components)
    # and finally unquote slashes and other funny characters
    return urllib.unquote(rv)

def pathname2url(pathname):
    """OS-specific conversion from a file system path to a relative URL
    of the 'file' scheme; not recommended for general use."""
    if '/' in pathname:
        raise RuntimeError, "Cannot convert pathname containing slashes"
    components = pathname.split(':')
    # Remove empty first and/or last component
    if components[0] == '':
        del components[0]
    if components[-1] == '':
        del components[-1]
    # Replace empty string ('::') by .. (will result in '/../' later)
    for i in range(len(components)):
        if components[i] == '':
            components[i] = '..'
    # Truncate names longer than 31 bytes
    components = map(_pncomp2url, components)

    if os.path.isabs(pathname):
        return '/' + '/'.join(components)
    else:
        return '/'.join(components)

def _pncomp2url(component):
    component = urllib.quote(component[:31], safe='')  # We want to quote slashes
    return component
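The colon-to-slash mapping these functions implement can be illustrated with a small Python 3 re-creation (a sketch: the real module uses the Python 2 urllib helpers and macpath's isabs(), which the explicit `absolute` flag stands in for here):

```python
import urllib.parse

def mac_pathname2url(pathname, absolute=True):
    """Sketch of pathname2url above: split on ':', map '::' to '..',
    quote each component (truncated to 31 bytes), join with '/'."""
    components = pathname.split(":")
    if components and components[0] == "":
        del components[0]                      # leading ':' (relative path)
    if components and components[-1] == "":
        del components[-1]                     # trailing ':'
    components = [".." if c == "" else c for c in components]  # '::' climbs up
    quoted = [urllib.parse.quote(c[:31], safe="") for c in components]
    return ("/" if absolute else "") + "/".join(quoted)

print(mac_pathname2url("Macintosh HD:Documents:notes.txt"))
# '/Macintosh%20HD/Documents/notes.txt'
```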
[A second compiled file follows: calendar.pyc, marshalled Python 2.7 bytecode for the calendar module (IllegalMonthError, IllegalWeekdayError, the localized month/day name tables, Calendar, TextCalendar, HTMLCalendar, TimeEncoding, LocaleTextCalendar, LocaleHTMLCalendar, the timegm helper, and the optparse-based command-line entry point). It is not recoverable as source. Its recoverable module docstring reads:]

Calendar printing functions

Note when comparing these calendars to the ones printed by cal(1): By
default, these calendars have Monday as the first day of the week, and
Sunday as the last (the European convention). Use setfirstweekday() to
set the first day of the week (0=Monday, 6=Sunday).
d,d-d6dd/�|j|�\}}|jr�|jr�|jd0�tj	d�n|j|jf}|j
d.kr�|jr�td$|�}n	t�}|j}|dkr�tj
�}ntd(|d|j�}t|�dkrD|jtjj�j|�GHq�t|�dkrt|jt|d�|�GHq�|jd1�tj	d�nM|jr�td$|�}n	t�}td2|jd3|j�}t|�dkr�|j|d4<|j|d5<nt|�dkr2|jtjj�j|�}n�t|�dkrc|jt|d�|�}nXt|�dkr�|jt|d�t|d�|�}n|jd1�tj	d�|jr�|j|j�}n|GHdS(7Ni����tusages%usage: %prog [options] [year [month]]s-ws--widthtdestRKttypetinttdefaultithelps+width of date column (default 2, text only)s-ls--linestlinesis4number of lines for each week (default 1, text only)s-ss	--spacingR�is-spacing between months (default 6, text only)s-ms--monthsRLis%months per row (default 3, text only)s-cs--cssR�scalendar.csssCSS to use for page (html only)s-Ls--localeR�s.locale to be used from month and weekday namess-es
--encodingR�sEncoding to use for outputs-ts--typettexttchoicesthtmlsoutput type (text or html)s/if --locale is specified --encoding is requiredsincorrect number of argumentsReRfRsRt(R�R�( toptparsetOptionParsert
add_optionR�t
parse_argsR�R�terrorR�texitR�R�R~R�tdictR�RER�R%R&ttodayR+R�R�RRRKR�R�RLR|RdR�(	R�R�tparsertoptionsR�RnR�toptdicttresult((s /usr/lib64/python2.7/calendar.pytmainms�								
			 
		
!,

	t__main__(((ii(DRPR�R%R�R�t__all__t
ValueErrorR�RRRJR0R/RR)RRRRR$R�tTUESDAYt	WEDNESDAYtTHURSDAYtFRIDAYtSATURDAYR�RRRRtobjectR3RRR~R�R�R�RsR5RRRIRRURSRiR_t
weekheaderR	RdR
R|RR}Rt	_colwidtht_spacingRRrtEPOCHR&R�R�R
R�Rtargv(((s /usr/lib64/python2.7/calendar.pyt<module>sf	-!				
�up
#													
	\�
[Binary data: compiled bytecode for /usr/lib64/python2.7/urlparse.pyc.  The
marshal stream is not reproducible as text; its module docstring reads:]

Parse (absolute and relative) URLs.

urlparse module is based upon the following RFC specifications.

RFC 3986 (STD66): "Uniform Resource Identifiers" by T. Berners-Lee, R. Fielding
and L. Masinter, January 2005.

RFC 2732: "Format for Literal IPv6 Addresses in URL's" by R. Hinden,
B. Carpenter and L. Masinter, December 1999.

RFC 2396: "Uniform Resource Identifiers (URI): Generic Syntax" by T.
Berners-Lee, R. Fielding, and L. Masinter, August 1998.

RFC 2368: "The mailto URL scheme", by P. Hoffman, L. Masinter, J. Zwinski,
July 1998.

RFC 1808: "Relative Uniform Resource Locators", by R. Fielding, UC Irvine,
June 1995.

RFC 1738: "Uniform Resource Locators (URL)" by T. Berners-Lee, L. Masinter,
M. McCahill, December 1994.

RFC 3986 is considered the current standard and any future changes to the
urlparse module should conform with it.  The urlparse module is currently not
entirely compliant with this RFC due to de facto scenarios for parsing, and
for backward compatibility purposes, some parsing quirks from older RFCs are
retained.  The testcases in test_urlparse.py provide a good indicator of
parsing behavior.

The WHATWG URL Parser spec should also be considered.  We are not compliant
with it either, due to existing user code API behavior expectations (Hyrum's
Law).  It serves as a useful guide when making changes.

[Other readable docstrings embedded in the bytecode document urlparse/urlsplit
(the 6-tuple and 5-tuple component layouts), urlunparse, urlunsplit, urljoin,
urldefrag, and parse_qs/parse_qsl (keep_blank_values, strict_parsing,
max_num_fields, and separator arguments), plus the CVE-2021-23336
query-separator warning text referring to
https://access.redhat.com/articles/5860431.]

[Binary data: compiled bytecode for /usr/lib64/python2.7/posixfile.pyc.  Its
module docstring reads:]

Extended file operations available in POSIX.

f = posixfile.open(filename, [mode, [bufsize]])
      will create a new posixfile object

f = posixfile.fileopen(fileobject)
      will create a posixfile object from a builtin file object

f.file()
      will return the original builtin file object

f.dup()
      will return a new file object based on a new filedescriptor

f.dup2(fd)
      will return a new file object based on the given filedescriptor

f.flags(mode)
      will turn on the associated flag (merge)
      mode can contain the following characters:

  (character representing a flag)
      a       append only flag
      c       close on exec flag
      n       no delay flag
      s       synchronization flag
  (modifiers)
      !       turn flags 'off' instead of default 'on'
      =       copy flags 'as is' instead of default 'merge'
      ?       return a string in which the characters represent the flags
              that are set

      note: - the '!' and '=' modifiers are mutually exclusive.
            - the '?' modifier will return the status of the flags after they
              have been changed by other characters in the mode string

f.lock(mode [, len [, start [, whence]]])
      will (un)lock a region
      mode can contain the following characters:

  (character representing type of lock)
      u       unlock
      r       read lock
      w       write lock
  (modifiers)
      |       wait until the lock can be granted
      ?       return the first lock conflicting with the requested lock
              or 'None' if there is no conflict. The lock returned is in the
              format (mode, len, start, whence, pid) where mode is a
              character representing the type of lock ('r' or 'w')

      note: - the '?' modifier prevents a region from being locked; it is
              query only

[The bytecode also carries the deprecation notice: "The posixfile module is
deprecated; fcntl.lockf() provides better locking".]

#! /usr/bin/python2.7
"""An RFC 2821 smtp proxy.

Usage: %(program)s [options] [localhost:localport [remotehost:remoteport]]

Options:

    --nosetuid
    -n
        This program generally tries to setuid `nobody', unless this flag is
        set.  The setuid call will fail if this program is not run as root (in
        which case, use this flag).

    --version
    -V
        Print the version number and exit.

    --class classname
    -c classname
        Use `classname' as the concrete SMTP proxy class.  Uses `PureProxy' by
        default.

    --debug
    -d
        Turn on debugging prints.

    --help
    -h
        Print this message and exit.

Version: %(__version__)s

If localhost is not given then `localhost' is used, and if localport is not
given then 8025 is used.  If remotehost is not given then `localhost' is used,
and if remoteport is not given, then 25 is used.
"""

# Overview:
#
# This file implements the minimal SMTP protocol as defined in RFC 821.  It
# has a hierarchy of classes which implement the backend functionality for the
# smtpd.  A number of classes are provided:
#
#   SMTPServer - the base class for the backend.  Raises NotImplementedError
#   if you try to use it.
#
#   DebuggingServer - simply prints each message it receives on stdout.
#
#   PureProxy - Proxies all messages to a real smtpd which does final
#   delivery.  One known problem with this class is that it doesn't handle
#   SMTP errors from the backend server at all.  This should be fixed
#   (contributions are welcome!).
#
#   MailmanProxy - An experimental hack to work with GNU Mailman
#   <www.list.org>.  Using this server as your real incoming smtpd, your
#   mailhost will automatically recognize and accept mail destined to Mailman
#   lists when those lists are created.  Every message not destined for a list
#   gets forwarded to a real backend smtpd, as with PureProxy.  Again, errors
#   are not handled correctly yet.
#
# Please note that this script requires Python 2.0
#
# Author: Barry Warsaw <barry@python.org>
#
# TODO:
#
# - support mailbox delivery
# - alias files
# - ESMTP
# - handle error codes from the backend smtpd

import sys
import os
import errno
import getopt
import time
import socket
import asyncore
import asynchat

__all__ = ["SMTPServer","DebuggingServer","PureProxy","MailmanProxy"]

program = sys.argv[0]
__version__ = 'Python SMTP proxy version 0.2'


class Devnull:
    def write(self, msg): pass
    def flush(self): pass


DEBUGSTREAM = Devnull()
NEWLINE = '\n'
EMPTYSTRING = ''
COMMASPACE = ', '


def usage(code, msg=''):
    print >> sys.stderr, __doc__ % globals()
    if msg:
        print >> sys.stderr, msg
    sys.exit(code)


class SMTPChannel(asynchat.async_chat):
    COMMAND = 0
    DATA = 1

    def __init__(self, server, conn, addr):
        asynchat.async_chat.__init__(self, conn)
        self.__server = server
        self.__conn = conn
        self.__addr = addr
        self.__line = []
        self.__state = self.COMMAND
        self.__greeting = 0
        self.__mailfrom = None
        self.__rcpttos = []
        self.__data = ''
        self.__fqdn = socket.getfqdn()
        try:
            self.__peer = conn.getpeername()
        except socket.error, err:
            # a race condition may occur if the other end is closing

            # before we can get the peername
            self.close()
            if err[0] != errno.ENOTCONN:
                raise
            return
        print >> DEBUGSTREAM, 'Peer:', repr(self.__peer)
        self.push('220 %s %s' % (self.__fqdn, __version__))
        self.set_terminator('\r\n')

    # Overrides base class for convenience
    def push(self, msg):
        asynchat.async_chat.push(self, msg + '\r\n')

    # Implementation of base class abstract method
    def collect_incoming_data(self, data):
        self.__line.append(data)

    # Implementation of base class abstract method
    def found_terminator(self):
        line = EMPTYSTRING.join(self.__line)
        print >> DEBUGSTREAM, 'Data:', repr(line)
        self.__line = []
        if self.__state == self.COMMAND:
            if not line:
                self.push('500 Error: bad syntax')
                return
            method = None
            i = line.find(' ')
            if i < 0:
                command = line.upper()
                arg = None
            else:
                command = line[:i].upper()
                arg = line[i+1:].strip()
            method = getattr(self, 'smtp_' + command, None)
            if not method:
                self.push('502 Error: command "%s" not implemented' % command)
                return
            method(arg)
            return
        else:
            if self.__state != self.DATA:
                self.push('451 Internal confusion')
                return
            # Remove extraneous carriage returns and de-transparency according
            # to RFC 821, Section 4.5.2.
            data = []
            for text in line.split('\r\n'):
                if text and text[0] == '.':
                    data.append(text[1:])
                else:
                    data.append(text)
            self.__data = NEWLINE.join(data)
            status = self.__server.process_message(self.__peer,
                                                   self.__mailfrom,
                                                   self.__rcpttos,
                                                   self.__data)
            self.__rcpttos = []
            self.__mailfrom = None
            self.__state = self.COMMAND
            self.set_terminator('\r\n')
            if not status:
                self.push('250 Ok')
            else:
                self.push(status)
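The DATA branch above undoes RFC 821 Section 4.5.2 transparency (stripping the extra leading dot that clients prepend to dot-initial lines). The same loop, restated as a free function for illustration (`undo_transparency` is not a name used by this module):

```python
def undo_transparency(body):
    # Strip the leading '.' that SMTP clients prepend to lines that
    # start with a dot (RFC 821, Section 4.5.2), and normalize the
    # CRLF line endings to '\n', as found_terminator does above.
    out = []
    for text in body.split('\r\n'):
        if text and text[0] == '.':
            out.append(text[1:])
        else:
            out.append(text)
    return '\n'.join(out)
```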

    # SMTP and ESMTP commands
    def smtp_HELO(self, arg):
        if not arg:
            self.push('501 Syntax: HELO hostname')
            return
        if self.__greeting:
            self.push('503 Duplicate HELO/EHLO')
        else:
            self.__greeting = arg
            self.push('250 %s' % self.__fqdn)

    def smtp_NOOP(self, arg):
        if arg:
            self.push('501 Syntax: NOOP')
        else:
            self.push('250 Ok')

    def smtp_QUIT(self, arg):
        # arg is ignored
        self.push('221 Bye')
        self.close_when_done()

    # factored
    def __getaddr(self, keyword, arg):
        address = None
        keylen = len(keyword)
        if arg[:keylen].upper() == keyword:
            address = arg[keylen:].strip()
            if not address:
                pass
            elif address[0] == '<' and address[-1] == '>' and address != '<>':
                # Addresses can be in the form <person@dom.com> but watch out
                # for null address, e.g. <>
                address = address[1:-1]
        return address
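The factored `__getaddr` helper matches a keyword prefix and strips one pair of angle brackets, leaving the null address `<>` intact. The same logic as a standalone sketch (`extract_addr` is an illustrative name, not part of the class):

```python
def extract_addr(keyword, arg):
    # Match the keyword prefix (case-insensitively), then strip one pair
    # of angle brackets -- but leave the null address '<>' alone.
    address = None
    keylen = len(keyword)
    if arg[:keylen].upper() == keyword:
        address = arg[keylen:].strip()
        if address and address[0] == '<' and address[-1] == '>' and address != '<>':
            address = address[1:-1]
    return address
```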

    def smtp_MAIL(self, arg):
        print >> DEBUGSTREAM, '===> MAIL', arg
        address = self.__getaddr('FROM:', arg) if arg else None
        if not address:
            self.push('501 Syntax: MAIL FROM:<address>')
            return
        if self.__mailfrom:
            self.push('503 Error: nested MAIL command')
            return
        self.__mailfrom = address
        print >> DEBUGSTREAM, 'sender:', self.__mailfrom
        self.push('250 Ok')

    def smtp_RCPT(self, arg):
        print >> DEBUGSTREAM, '===> RCPT', arg
        if not self.__mailfrom:
            self.push('503 Error: need MAIL command')
            return
        address = self.__getaddr('TO:', arg) if arg else None
        if not address:
            self.push('501 Syntax: RCPT TO: <address>')
            return
        self.__rcpttos.append(address)
        print >> DEBUGSTREAM, 'recips:', self.__rcpttos
        self.push('250 Ok')

    def smtp_RSET(self, arg):
        if arg:
            self.push('501 Syntax: RSET')
            return
        # Resets the sender, recipients, and data, but not the greeting
        self.__mailfrom = None
        self.__rcpttos = []
        self.__data = ''
        self.__state = self.COMMAND
        self.push('250 Ok')

    def smtp_DATA(self, arg):
        if not self.__rcpttos:
            self.push('503 Error: need RCPT command')
            return
        if arg:
            self.push('501 Syntax: DATA')
            return
        self.__state = self.DATA
        self.set_terminator('\r\n.\r\n')
        self.push('354 End data with <CR><LF>.<CR><LF>')


class SMTPServer(asyncore.dispatcher):
    def __init__(self, localaddr, remoteaddr):
        self._localaddr = localaddr
        self._remoteaddr = remoteaddr
        asyncore.dispatcher.__init__(self)
        try:
            self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
            # try to re-use a server port if possible
            self.set_reuse_addr()
            self.bind(localaddr)
            self.listen(5)
        except:
            # cleanup asyncore.socket_map before raising
            self.close()
            raise
        else:
            print >> DEBUGSTREAM, \
                  '%s started at %s\n\tLocal addr: %s\n\tRemote addr:%s' % (
                self.__class__.__name__, time.ctime(time.time()),
                localaddr, remoteaddr)

    def handle_accept(self):
        pair = self.accept()
        if pair is not None:
            conn, addr = pair
            print >> DEBUGSTREAM, 'Incoming connection from %s' % repr(addr)
            channel = SMTPChannel(self, conn, addr)

    # API for "doing something useful with the message"
    def process_message(self, peer, mailfrom, rcpttos, data):
        """Override this abstract method to handle messages from the client.

        peer is a tuple containing (ipaddr, port) of the client that made the
        socket connection to our smtp port.

        mailfrom is the raw address the client claims the message is coming
        from.

        rcpttos is a list of raw addresses the client wishes to deliver the
        message to.

        data is a string containing the entire full text of the message,
        headers (if supplied) and all.  It has been `de-transparencied'
        according to RFC 821, Section 4.5.2.  In other words, a line
        containing a `.' followed by other text has had the leading dot
        removed.

        This function should return None, for a normal `250 Ok' response;
        otherwise it returns the desired response string in RFC 821 format.

        """
        raise NotImplementedError
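A minimal override following this contract might look like the sketch below. `CollectingServer` is illustrative only: it stores each message in memory and stands in for an `SMTPServer` subclass so that no socket needs to be opened.

```python
class CollectingServer(object):
    """Sketch of a process_message override: record each message and
    return None so the channel replies with the normal '250 Ok'."""

    def __init__(self):
        self.inbox = []

    def process_message(self, peer, mailfrom, rcpttos, data):
        self.inbox.append({'peer': peer, 'from': mailfrom,
                           'to': list(rcpttos), 'data': data})
        return None
```

Returning a string instead of None would be sent to the client verbatim, so it must be a valid RFC 821 response line.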


class DebuggingServer(SMTPServer):
    # Do something with the gathered message
    def process_message(self, peer, mailfrom, rcpttos, data):
        inheaders = 1
        lines = data.split('\n')
        print '---------- MESSAGE FOLLOWS ----------'
        for line in lines:
            # headers first
            if inheaders and not line:
                print 'X-Peer:', peer[0]
                inheaders = 0
            print line
        print '------------ END MESSAGE ------------'


class PureProxy(SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        lines = data.split('\n')
        # Look for the last header
        i = 0
        for line in lines:
            if not line:
                break
            i += 1
        lines.insert(i, 'X-Peer: %s' % peer[0])
        data = NEWLINE.join(lines)
        refused = self._deliver(mailfrom, rcpttos, data)
        # TBD: what to do with refused addresses?
        print >> DEBUGSTREAM, 'we got some refusals:', refused
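The header-insertion step above (finding the first blank line and inserting `X-Peer` just before it) can be restated as a free function for illustration; `add_x_peer` is not a name used by this module:

```python
def add_x_peer(data, peer_ip):
    # Insert an X-Peer header after the last existing header line,
    # mirroring the loop in PureProxy.process_message above.
    lines = data.split('\n')
    i = 0
    for line in lines:
        if not line:
            break
        i += 1
    lines.insert(i, 'X-Peer: %s' % peer_ip)
    return '\n'.join(lines)
```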

    def _deliver(self, mailfrom, rcpttos, data):
        import smtplib
        refused = {}
        try:
            s = smtplib.SMTP()
            s.connect(self._remoteaddr[0], self._remoteaddr[1])
            try:
                refused = s.sendmail(mailfrom, rcpttos, data)
            finally:
                s.quit()
        except smtplib.SMTPRecipientsRefused, e:
            print >> DEBUGSTREAM, 'got SMTPRecipientsRefused'
            refused = e.recipients
        except (socket.error, smtplib.SMTPException), e:
            print >> DEBUGSTREAM, 'got', e.__class__
            # All recipients were refused.  If the exception had an associated
            # error code, use it.  Otherwise, fake it with a non-triggering
            # exception code.
            errcode = getattr(e, 'smtp_code', -1)
            errmsg = getattr(e, 'smtp_error', 'ignore')
            for r in rcpttos:
                refused[r] = (errcode, errmsg)
        return refused


class MailmanProxy(PureProxy):
    def process_message(self, peer, mailfrom, rcpttos, data):
        from cStringIO import StringIO
        from Mailman import Utils
        from Mailman import Message
        from Mailman import MailList
        # If the message is to a Mailman mailing list, then we'll invoke the
        # Mailman script directly, without going through the real smtpd.
        # Otherwise we'll forward it to the local proxy for disposition.
        listnames = []
        for rcpt in rcpttos:
            local = rcpt.lower().split('@')[0]
            # We allow the following variations on the theme
            #   listname
            #   listname-admin
            #   listname-owner
            #   listname-request
            #   listname-join
            #   listname-leave
            parts = local.split('-')
            if len(parts) > 2:
                continue
            listname = parts[0]
            if len(parts) == 2:
                command = parts[1]
            else:
                command = ''
            if not Utils.list_exists(listname) or command not in (
                    '', 'admin', 'owner', 'request', 'join', 'leave'):
                continue
            listnames.append((rcpt, listname, command))
        # Remove all list recipients from rcpttos and forward what we're not
        # going to take care of ourselves.  Linear removal should be fine
        # since we don't expect a large number of recipients.
        for rcpt, listname, command in listnames:
            rcpttos.remove(rcpt)
        # If there are any non-list-destined recipients left, forward them
        # to the real smtpd.
        if rcpttos:
            print >> DEBUGSTREAM, 'forwarding recips:', ' '.join(rcpttos)
            refused = self._deliver(mailfrom, rcpttos, data)
            # TBD: what to do with refused addresses?
            print >> DEBUGSTREAM, 'we got refusals:', refused
        # Now deliver directly to the list commands
        mlists = {}
        s = StringIO(data)
        msg = Message.Message(s)
        # These headers are required for the proper execution of Mailman.  All
        # MTAs in existence seem to add these if the original message doesn't
        # have them.
        if not msg.getheader('from'):
            msg['From'] = mailfrom
        if not msg.getheader('date'):
            msg['Date'] = time.ctime(time.time())
        for rcpt, listname, command in listnames:
            print >> DEBUGSTREAM, 'sending message to', rcpt
            mlist = mlists.get(listname)
            if not mlist:
                mlist = MailList.MailList(listname, lock=0)
                mlists[listname] = mlist
            # dispatch on the type of command
            if command == '':
                # post
                msg.Enqueue(mlist, tolist=1)
            elif command == 'admin':
                msg.Enqueue(mlist, toadmin=1)
            elif command == 'owner':
                msg.Enqueue(mlist, toowner=1)
            elif command == 'request':
                msg.Enqueue(mlist, torequest=1)
            elif command in ('join', 'leave'):
                # TBD: this is a hack!
                if command == 'join':
                    msg['Subject'] = 'subscribe'
                else:
                    msg['Subject'] = 'unsubscribe'
                msg.Enqueue(mlist, torequest=1)

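
The recipient parsing at the top of MailmanProxy.process_message() reduces to this sketch (parse_rcpt is a hypothetical name, not a Mailman API): a local part of the form listname or listname-command is accepted only when the command suffix is one of the recognized variations.

```python
# Hypothetical sketch of the recipient parsing above; the real code
# additionally checks Utils.list_exists(listname).
_COMMANDS = ('', 'admin', 'owner', 'request', 'join', 'leave')

def parse_rcpt(rcpt):
    local = rcpt.lower().split('@')[0]
    parts = local.split('-')
    if len(parts) > 2:
        return None             # e.g. foo-bar-baz: not a list address
    listname = parts[0]
    command = parts[1] if len(parts) == 2 else ''
    if command not in _COMMANDS:
        return None
    return listname, command
```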

class Options:
    setuid = 1
    classname = 'PureProxy'


def parseargs():
    global DEBUGSTREAM
    try:
        opts, args = getopt.getopt(
            sys.argv[1:], 'nVhc:d',
            ['class=', 'nosetuid', 'version', 'help', 'debug'])
    except getopt.error, e:
        usage(1, e)

    options = Options()
    for opt, arg in opts:
        if opt in ('-h', '--help'):
            usage(0)
        elif opt in ('-V', '--version'):
            print >> sys.stderr, __version__
            sys.exit(0)
        elif opt in ('-n', '--nosetuid'):
            options.setuid = 0
        elif opt in ('-c', '--class'):
            options.classname = arg
        elif opt in ('-d', '--debug'):
            DEBUGSTREAM = sys.stderr

    # parse the rest of the arguments
    if len(args) < 1:
        localspec = 'localhost:8025'
        remotespec = 'localhost:25'
    elif len(args) < 2:
        localspec = args[0]
        remotespec = 'localhost:25'
    elif len(args) < 3:
        localspec = args[0]
        remotespec = args[1]
    else:
        usage(1, 'Invalid arguments: %s' % COMMASPACE.join(args))

    # split into host/port pairs
    i = localspec.find(':')
    if i < 0:
        usage(1, 'Bad local spec: %s' % localspec)
    options.localhost = localspec[:i]
    try:
        options.localport = int(localspec[i+1:])
    except ValueError:
        usage(1, 'Bad local port: %s' % localspec)
    i = remotespec.find(':')
    if i < 0:
        usage(1, 'Bad remote spec: %s' % remotespec)
    options.remotehost = remotespec[:i]
    try:
        options.remoteport = int(remotespec[i+1:])
    except ValueError:
        usage(1, 'Bad remote port: %s' % remotespec)
    return options
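
The host:port handling above is written out twice, once per spec. It can be sketched as one helper (split_hostport is a hypothetical name) that raises ValueError where the original calls usage():

```python
# Hypothetical refactoring of the host:port parsing in parseargs();
# raises ValueError instead of calling usage().
def split_hostport(spec, what):
    i = spec.find(':')
    if i < 0:
        raise ValueError('Bad %s spec: %s' % (what, spec))
    try:
        port = int(spec[i+1:])
    except ValueError:
        raise ValueError('Bad %s port: %s' % (what, spec))
    return spec[:i], port
```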


if __name__ == '__main__':
    options = parseargs()
    classname = options.classname
    if "." in classname:
        lastdot = classname.rfind(".")
        mod = __import__(classname[:lastdot], globals(), locals(), [""])
        classname = classname[lastdot+1:]
    else:
        import __main__ as mod
    class_ = getattr(mod, classname)
    proxy = class_((options.localhost, options.localport),
                   (options.remotehost, options.remoteport))
    # Become nobody
    if options.setuid:
        try:
            import pwd
        except ImportError:
            print >> sys.stderr, \
                  'Cannot import module "pwd"; try running with -n option.'
            sys.exit(1)
        nobody = pwd.getpwnam('nobody')[2]
        try:
            os.setuid(nobody)
        except OSError, e:
            if e.errno != errno.EPERM: raise
            print >> sys.stderr, \
                  'Cannot setuid "nobody"; try running with -n option.'
            sys.exit(1)
    try:
        asyncore.loop()
    except KeyboardInterrupt:
        pass
	"""Open an arbitrary URL.

See the following document for more info on URLs:
"Names and Addresses, URIs, URLs, URNs, URCs", at
http://www.w3.org/pub/WWW/Addressing/Overview.html

See also the HTTP spec (from which the error codes are derived):
"HTTP - Hypertext Transfer Protocol", at
http://www.w3.org/pub/WWW/Protocols/

Related standards and specs:
- RFC1808: the "relative URL" spec. (authoritative status)
- RFC1738: the "URL standard". (authoritative status)
- RFC1630: the "URI spec". (informational status)

The object returned by URLopener().open(file) will differ per
protocol.  All you know is that it has methods read(), readline(),
readlines(), fileno(), close() and info().  The read*(), fileno()
and close() methods work like those of open files.
The info() method returns a mimetools.Message object which can be
used to query various info about the object, if available.
(mimetools.Message objects are queried with the getheader() method.)
"""

import string
import socket
import os
import time
import sys
import base64
import re

from urlparse import urljoin as basejoin

__all__ = ["urlopen", "URLopener", "FancyURLopener", "urlretrieve",
           "urlcleanup", "quote", "quote_plus", "unquote", "unquote_plus",
           "urlencode", "url2pathname", "pathname2url", "splittag",
           "localhost", "thishost", "ftperrors", "basejoin", "unwrap",
           "splittype", "splithost", "splituser", "splitpasswd", "splitport",
           "splitnport", "splitquery", "splitattr", "splitvalue",
           "getproxies"]

__version__ = '1.17'    # XXX This version is not always updated :-(

MAXFTPCACHE = 10        # Trim the ftp cache beyond this size

# Helper for non-unix systems
if os.name == 'nt':
    from nturl2path import url2pathname, pathname2url
elif os.name == 'riscos':
    from rourl2path import url2pathname, pathname2url
else:
    def url2pathname(pathname):
        """OS-specific conversion from a relative URL of the 'file' scheme
        to a file system path; not recommended for general use."""
        return unquote(pathname)

    def pathname2url(pathname):
        """OS-specific conversion from a file system path to a relative URL
        of the 'file' scheme; not recommended for general use."""
        return quote(pathname)

# This really consists of two pieces:
# (1) a class which handles opening of all sorts of URLs
#     (plus assorted utilities etc.)
# (2) a set of functions for parsing URLs
# XXX Should these be separated out into different modules?


# Shortcut for basic usage
_urlopener = None
def urlopen(url, data=None, proxies=None, context=None):
    """Create a file-like object for the specified URL to read from."""
    from warnings import warnpy3k
    warnpy3k("urllib.urlopen() has been removed in Python 3.0 in "
             "favor of urllib2.urlopen()", stacklevel=2)

    global _urlopener
    if proxies is not None or context is not None:
        opener = FancyURLopener(proxies=proxies, context=context)
    elif not _urlopener:
        opener = FancyURLopener()
        _urlopener = opener
    else:
        opener = _urlopener
    if data is None:
        return opener.open(url)
    else:
        return opener.open(url, data)
def urlretrieve(url, filename=None, reporthook=None, data=None, context=None):
    global _urlopener
    if context is not None:
        opener = FancyURLopener(context=context)
    elif not _urlopener:
        _urlopener = opener = FancyURLopener()
    else:
        opener = _urlopener
    return opener.retrieve(url, filename, reporthook, data)
def urlcleanup():
    if _urlopener:
        _urlopener.cleanup()
    _safe_quoters.clear()
    ftpcache.clear()

# check for SSL
try:
    import ssl
except:
    _have_ssl = False
else:
    _have_ssl = True

# exception raised when downloaded size does not match content-length
class ContentTooShortError(IOError):
    def __init__(self, message, content):
        IOError.__init__(self, message)
        self.content = content

ftpcache = {}
class URLopener:
    """Class to open URLs.
    This is a class rather than just a subroutine because we may need
    more than one set of global protocol-specific options.
    Note -- this is a base class for those who don't want the
    automatic handling of error types 302 (relocated) and 401
    (authorization needed)."""

    __tempfiles = None

    version = "Python-urllib/%s" % __version__

    # Constructor
    def __init__(self, proxies=None, context=None, **x509):
        if proxies is None:
            proxies = getproxies()
        assert hasattr(proxies, 'has_key'), "proxies must be a mapping"
        self.proxies = proxies
        self.key_file = x509.get('key_file')
        self.cert_file = x509.get('cert_file')
        self.context = context
        self.addheaders = [('User-Agent', self.version), ('Accept', '*/*')]
        self.__tempfiles = []
        self.__unlink = os.unlink # See cleanup()
        self.tempcache = None
        # Undocumented feature: if you assign {} to tempcache,
        # it is used to cache files retrieved with
        # self.retrieve().  This is not enabled by default
        # since it does not work for changing documents (and I
        # haven't got the logic to check expiration headers
        # yet).
        self.ftpcache = ftpcache
        # Undocumented feature: you can use a different
        # ftp cache by assigning to the .ftpcache member;
        # in case you want logically independent URL openers
        # XXX This is not threadsafe.  Bah.

    def __del__(self):
        self.close()

    def close(self):
        self.cleanup()

    def cleanup(self):
        # This code sometimes runs when the rest of this module
        # has already been deleted, so it can't use any globals
        # or import anything.
        if self.__tempfiles:
            for file in self.__tempfiles:
                try:
                    self.__unlink(file)
                except OSError:
                    pass
            del self.__tempfiles[:]
        if self.tempcache:
            self.tempcache.clear()

    def addheader(self, *args):
        """Add a header to be used by the HTTP interface only
        e.g. u.addheader('Accept', 'sound/basic')"""
        self.addheaders.append(args)

    # External interface
    def open(self, fullurl, data=None):
        """Use URLopener().open(file) instead of open(file, 'r')."""
        fullurl = unwrap(toBytes(fullurl))
        # percent-encode the URL, fixing lame server errors (e.g. a space
        # within url paths).
        fullurl = quote(fullurl, safe="%/:=&?~#+!$,;'@()*[]|")
        if self.tempcache and fullurl in self.tempcache:
            filename, headers = self.tempcache[fullurl]
            fp = open(filename, 'rb')
            return addinfourl(fp, headers, fullurl)
        urltype, url = splittype(fullurl)
        if not urltype:
            urltype = 'file'
        if urltype in self.proxies:
            proxy = self.proxies[urltype]
            urltype, proxyhost = splittype(proxy)
            host, selector = splithost(proxyhost)
            url = (host, fullurl) # Signal special case to open_*()
        else:
            proxy = None
        name = 'open_' + urltype
        self.type = urltype
        name = name.replace('-', '_')

        # bpo-35907: disallow the file reading with the type not allowed
        if not hasattr(self, name) or name == 'open_local_file':
            if proxy:
                return self.open_unknown_proxy(proxy, fullurl, data)
            else:
                return self.open_unknown(fullurl, data)
        try:
            if data is None:
                return getattr(self, name)(url)
            else:
                return getattr(self, name)(url, data)
        except socket.error, msg:
            raise IOError, ('socket error', msg), sys.exc_info()[2]

    def open_unknown(self, fullurl, data=None):
        """Overridable interface to open unknown URL type."""
        type, url = splittype(fullurl)
        raise IOError, ('url error', 'unknown url type', type)

    def open_unknown_proxy(self, proxy, fullurl, data=None):
        """Overridable interface to open unknown URL type."""
        type, url = splittype(fullurl)
        raise IOError, ('url error', 'invalid proxy for %s' % type, proxy)

    # External interface
    def retrieve(self, url, filename=None, reporthook=None, data=None):
        """retrieve(url) returns (filename, headers) for a local object
        or (tempfilename, headers) for a remote object."""
        url = unwrap(toBytes(url))
        if self.tempcache and url in self.tempcache:
            return self.tempcache[url]
        type, url1 = splittype(url)
        if filename is None and (not type or type == 'file'):
            try:
                fp = self.open_local_file(url1)
                hdrs = fp.info()
                fp.close()
                return url2pathname(splithost(url1)[1]), hdrs
            except IOError:
                pass
        fp = self.open(url, data)
        try:
            headers = fp.info()
            if filename:
                tfp = open(filename, 'wb')
            else:
                import tempfile
                garbage, path = splittype(url)
                garbage, path = splithost(path or "")
                path, garbage = splitquery(path or "")
                path, garbage = splitattr(path or "")
                suffix = os.path.splitext(path)[1]
                (fd, filename) = tempfile.mkstemp(suffix)
                self.__tempfiles.append(filename)
                tfp = os.fdopen(fd, 'wb')
            try:
                result = filename, headers
                if self.tempcache is not None:
                    self.tempcache[url] = result
                bs = 1024*8
                size = -1
                read = 0
                blocknum = 0
                if "content-length" in headers:
                    size = int(headers["Content-Length"])
                if reporthook:
                    reporthook(blocknum, bs, size)
                while 1:
                    block = fp.read(bs)
                    if block == "":
                        break
                    read += len(block)
                    tfp.write(block)
                    blocknum += 1
                    if reporthook:
                        reporthook(blocknum, bs, size)
            finally:
                tfp.close()
        finally:
            fp.close()

        # raise exception if actual size does not match content-length header
        if size >= 0 and read < size:
            raise ContentTooShortError("retrieval incomplete: got only %i out "
                                       "of %i bytes" % (read, size), result)

        return result

    # Each method named open_<type> knows how to open that type of URL

    def open_http(self, url, data=None):
        """Use HTTP protocol."""
        import httplib
        user_passwd = None
        proxy_passwd= None
        if isinstance(url, str):
            host, selector = splithost(url)
            if host:
                user_passwd, host = splituser(host)
                host = unquote(host)
            realhost = host
        else:
            host, selector = url
            # check whether the proxy contains authorization information
            proxy_passwd, host = splituser(host)
            # now we proceed with the url we want to obtain
            urltype, rest = splittype(selector)
            url = rest
            user_passwd = None
            if urltype.lower() != 'http':
                realhost = None
            else:
                realhost, rest = splithost(rest)
                if realhost:
                    user_passwd, realhost = splituser(realhost)
                if user_passwd:
                    selector = "%s://%s%s" % (urltype, realhost, rest)
                if proxy_bypass(realhost):
                    host = realhost

            #print "proxy via http:", host, selector
        if not host: raise IOError, ('http error', 'no host given')

        if proxy_passwd:
            proxy_passwd = unquote(proxy_passwd)
            proxy_auth = base64.b64encode(proxy_passwd).strip()
        else:
            proxy_auth = None

        if user_passwd:
            user_passwd = unquote(user_passwd)
            auth = base64.b64encode(user_passwd).strip()
        else:
            auth = None
        h = httplib.HTTP(host)
        if data is not None:
            h.putrequest('POST', selector)
            h.putheader('Content-Type', 'application/x-www-form-urlencoded')
            h.putheader('Content-Length', '%d' % len(data))
        else:
            h.putrequest('GET', selector)
        if proxy_auth: h.putheader('Proxy-Authorization', 'Basic %s' % proxy_auth)
        if auth: h.putheader('Authorization', 'Basic %s' % auth)
        if realhost: h.putheader('Host', realhost)
        for args in self.addheaders: h.putheader(*args)
        h.endheaders(data)
        errcode, errmsg, headers = h.getreply()
        fp = h.getfile()
        if errcode == -1:
            if fp: fp.close()
            # something went wrong with the HTTP status line
            raise IOError, ('http protocol error', 0,
                            'got a bad status line', None)
        # According to RFC 2616, "2xx" code indicates that the client's
        # request was successfully received, understood, and accepted.
        if (200 <= errcode < 300):
            return addinfourl(fp, headers, "http:" + url, errcode)
        else:
            if data is None:
                return self.http_error(url, fp, errcode, errmsg, headers)
            else:
                return self.http_error(url, fp, errcode, errmsg, headers, data)

    def http_error(self, url, fp, errcode, errmsg, headers, data=None):
        """Handle http errors.
        Derived class can override this, or provide specific handlers
        named http_error_DDD where DDD is the 3-digit error code."""
        # First check if there's a specific handler for this error
        name = 'http_error_%d' % errcode
        if hasattr(self, name):
            method = getattr(self, name)
            if data is None:
                result = method(url, fp, errcode, errmsg, headers)
            else:
                result = method(url, fp, errcode, errmsg, headers, data)
            if result: return result
        return self.http_error_default(url, fp, errcode, errmsg, headers)

    def http_error_default(self, url, fp, errcode, errmsg, headers):
        """Default error handler: close the connection and raise IOError."""
        fp.close()
        raise IOError, ('http error', errcode, errmsg, headers)

    if _have_ssl:
        def open_https(self, url, data=None):
            """Use HTTPS protocol."""

            import httplib
            user_passwd = None
            proxy_passwd = None
            if isinstance(url, str):
                host, selector = splithost(url)
                if host:
                    user_passwd, host = splituser(host)
                    host = unquote(host)
                realhost = host
            else:
                host, selector = url
                # here we determine whether the proxy contains authorization information
                proxy_passwd, host = splituser(host)
                urltype, rest = splittype(selector)
                url = rest
                user_passwd = None
                if urltype.lower() != 'https':
                    realhost = None
                else:
                    realhost, rest = splithost(rest)
                    if realhost:
                        user_passwd, realhost = splituser(realhost)
                    if user_passwd:
                        selector = "%s://%s%s" % (urltype, realhost, rest)
                #print "proxy via https:", host, selector
            if not host: raise IOError, ('https error', 'no host given')
            if proxy_passwd:
                proxy_passwd = unquote(proxy_passwd)
                proxy_auth = base64.b64encode(proxy_passwd).strip()
            else:
                proxy_auth = None
            if user_passwd:
                user_passwd = unquote(user_passwd)
                auth = base64.b64encode(user_passwd).strip()
            else:
                auth = None
            h = httplib.HTTPS(host, 0,
                              key_file=self.key_file,
                              cert_file=self.cert_file,
                              context=self.context)
            if data is not None:
                h.putrequest('POST', selector)
                h.putheader('Content-Type',
                            'application/x-www-form-urlencoded')
                h.putheader('Content-Length', '%d' % len(data))
            else:
                h.putrequest('GET', selector)
            if proxy_auth: h.putheader('Proxy-Authorization', 'Basic %s' % proxy_auth)
            if auth: h.putheader('Authorization', 'Basic %s' % auth)
            if realhost: h.putheader('Host', realhost)
            for args in self.addheaders: h.putheader(*args)
            h.endheaders(data)
            errcode, errmsg, headers = h.getreply()
            fp = h.getfile()
            if errcode == -1:
                if fp: fp.close()
                # something went wrong with the HTTP status line
                raise IOError, ('http protocol error', 0,
                                'got a bad status line', None)
            # According to RFC 2616, "2xx" code indicates that the client's
            # request was successfully received, understood, and accepted.
            if (200 <= errcode < 300):
                return addinfourl(fp, headers, "https:" + url, errcode)
            else:
                if data is None:
                    return self.http_error(url, fp, errcode, errmsg, headers)
                else:
                    return self.http_error(url, fp, errcode, errmsg, headers,
                                           data)

    def open_file(self, url):
        """Use local file or FTP depending on form of URL."""
        if not isinstance(url, str):
            raise IOError, ('file error', 'proxy support for file protocol currently not implemented')
        if url[:2] == '//' and url[2:3] != '/' and url[2:12].lower() != 'localhost/':
            return self.open_ftp(url)
        else:
            return self.open_local_file(url)

    def open_local_file(self, url):
        """Use local file."""
        import mimetypes, mimetools, email.utils
        try:
            from cStringIO import StringIO
        except ImportError:
            from StringIO import StringIO
        host, file = splithost(url)
        localname = url2pathname(file)
        try:
            stats = os.stat(localname)
        except OSError, e:
            raise IOError(e.errno, e.strerror, e.filename)
        size = stats.st_size
        modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
        mtype = mimetypes.guess_type(url)[0]
        headers = mimetools.Message(StringIO(
            'Content-Type: %s\nContent-Length: %d\nLast-modified: %s\n' %
            (mtype or 'text/plain', size, modified)))
        if not host:
            urlfile = file
            if file[:1] == '/':
                urlfile = 'file://' + file
            elif file[:2] == './':
                raise ValueError("local file url may start with / or file:. Unknown url of type: %s" % url)
            return addinfourl(open(localname, 'rb'),
                              headers, urlfile)
        host, port = splitport(host)
        if not port \
           and socket.gethostbyname(host) in (localhost(), thishost()):
            urlfile = file
            if file[:1] == '/':
                urlfile = 'file://' + file
            return addinfourl(open(localname, 'rb'),
                              headers, urlfile)
        raise IOError, ('local file error', 'not on local host')

    def open_ftp(self, url):
        """Use FTP protocol."""
        if not isinstance(url, str):
            raise IOError, ('ftp error', 'proxy support for ftp protocol currently not implemented')
        import mimetypes, mimetools
        try:
            from cStringIO import StringIO
        except ImportError:
            from StringIO import StringIO
        host, path = splithost(url)
        if not host: raise IOError, ('ftp error', 'no host given')
        host, port = splitport(host)
        user, host = splituser(host)
        if user: user, passwd = splitpasswd(user)
        else: passwd = None
        host = unquote(host)
        user = user or ''
        passwd = passwd or ''
        host = socket.gethostbyname(host)
        if not port:
            import ftplib
            port = ftplib.FTP_PORT
        else:
            port = int(port)
        path, attrs = splitattr(path)
        path = unquote(path)
        dirs = path.split('/')
        dirs, file = dirs[:-1], dirs[-1]
        if dirs and not dirs[0]: dirs = dirs[1:]
        if dirs and not dirs[0]: dirs[0] = '/'
        key = user, host, port, '/'.join(dirs)
        # XXX thread unsafe!
        if len(self.ftpcache) > MAXFTPCACHE:
            # Prune the cache, rather arbitrarily
            for k in self.ftpcache.keys():
                if k != key:
                    v = self.ftpcache[k]
                    del self.ftpcache[k]
                    v.close()
        try:
            if not key in self.ftpcache:
                self.ftpcache[key] = \
                    ftpwrapper(user, passwd, host, port, dirs)
            if not file: type = 'D'
            else: type = 'I'
            for attr in attrs:
                attr, value = splitvalue(attr)
                if attr.lower() == 'type' and \
                   value in ('a', 'A', 'i', 'I', 'd', 'D'):
                    type = value.upper()
            (fp, retrlen) = self.ftpcache[key].retrfile(file, type)
            mtype = mimetypes.guess_type("ftp:" + url)[0]
            headers = ""
            if mtype:
                headers += "Content-Type: %s\n" % mtype
            if retrlen is not None and retrlen >= 0:
                headers += "Content-Length: %d\n" % retrlen
            headers = mimetools.Message(StringIO(headers))
            return addinfourl(fp, headers, "ftp:" + url)
        except ftperrors(), msg:
            raise IOError, ('ftp error', msg), sys.exc_info()[2]

    def open_data(self, url, data=None):
        """Use "data" URL."""
        if not isinstance(url, str):
            raise IOError, ('data error', 'proxy support for data protocol currently not implemented')
        # ignore POSTed data
        #
        # syntax of data URLs:
        # dataurl   := "data:" [ mediatype ] [ ";base64" ] "," data
        # mediatype := [ type "/" subtype ] *( ";" parameter )
        # data      := *urlchar
        # parameter := attribute "=" value
        import mimetools
        try:
            from cStringIO import StringIO
        except ImportError:
            from StringIO import StringIO
        try:
            [type, data] = url.split(',', 1)
        except ValueError:
            raise IOError, ('data error', 'bad data URL')
        if not type:
            type = 'text/plain;charset=US-ASCII'
        semi = type.rfind(';')
        if semi >= 0 and '=' not in type[semi:]:
            encoding = type[semi+1:]
            type = type[:semi]
        else:
            encoding = ''
        msg = []
        msg.append('Date: %s'%time.strftime('%a, %d %b %Y %H:%M:%S GMT',
                                            time.gmtime(time.time())))
        msg.append('Content-type: %s' % type)
        if encoding == 'base64':
            data = base64.decodestring(data)
        else:
            data = unquote(data)
        msg.append('Content-Length: %d' % len(data))
        msg.append('')
        msg.append(data)
        msg = '\n'.join(msg)
        f = StringIO(msg)
        headers = mimetools.Message(f, 0)
        #f.fileno = None     # needed for addinfourl
        return addinfourl(f, headers, url)
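
The dataurl grammar in the comment above can be exercised with a small stand-alone decoder (decode_data_url is a hypothetical name; unlike open_data() it skips percent-unquoting of non-base64 payloads and does not synthesize headers):

```python
import base64

# Hypothetical sketch of the data: URL decoding above; body is
# everything after the "data:" prefix. Percent-unquoting of
# non-base64 payloads is omitted for brevity.
def decode_data_url(body):
    mediatype, payload = body.split(',', 1)
    if not mediatype:
        mediatype = 'text/plain;charset=US-ASCII'
    if mediatype.endswith(';base64'):
        payload = base64.b64decode(payload)
        mediatype = mediatype[:-len(';base64')]
    return mediatype, payload
```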


class FancyURLopener(URLopener):
    """Derived class with handlers for errors we can handle (perhaps)."""

    def __init__(self, *args, **kwargs):
        URLopener.__init__(self, *args, **kwargs)
        self.auth_cache = {}
        self.tries = 0
        self.maxtries = 10

    def http_error_default(self, url, fp, errcode, errmsg, headers):
        """Default error handling -- don't raise an exception."""
        return addinfourl(fp, headers, "http:" + url, errcode)

    def http_error_302(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 302 -- relocated (temporarily)."""
        self.tries += 1
        try:
            if self.maxtries and self.tries >= self.maxtries:
                if hasattr(self, "http_error_500"):
                    meth = self.http_error_500
                else:
                    meth = self.http_error_default
                return meth(url, fp, 500,
                            "Internal Server Error: Redirect Recursion",
                            headers)
            result = self.redirect_internal(url, fp, errcode, errmsg,
                                            headers, data)
            return result
        finally:
            self.tries = 0

    def redirect_internal(self, url, fp, errcode, errmsg, headers, data):
        if 'location' in headers:
            newurl = headers['location']
        elif 'uri' in headers:
            newurl = headers['uri']
        else:
            return
        fp.close()
        # In case the server sent a relative URL, join with original:
        newurl = basejoin(self.type + ":" + url, newurl)

        # For security reasons we do not allow redirects to protocols
        # other than HTTP, HTTPS or FTP.
        newurl_lower = newurl.lower()
        if not (newurl_lower.startswith('http://') or
                newurl_lower.startswith('https://') or
                newurl_lower.startswith('ftp://')):
            raise IOError('redirect error', errcode,
                          errmsg + " - Redirection to url '%s' is not allowed" %
                          newurl,
                          headers)

        return self.open(newurl)

    def http_error_301(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 301 -- also relocated (permanently)."""
        return self.http_error_302(url, fp, errcode, errmsg, headers, data)

    def http_error_303(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 303 -- also relocated (essentially identical to 302)."""
        return self.http_error_302(url, fp, errcode, errmsg, headers, data)

    def http_error_307(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 307 -- relocated, but turn POST into error."""
        if data is None:
            return self.http_error_302(url, fp, errcode, errmsg, headers, data)
        else:
            return self.http_error_default(url, fp, errcode, errmsg, headers)

    def http_error_401(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 401 -- authentication required.
        This function supports Basic authentication only."""
        if 'www-authenticate' not in headers:
            # http_error_default raises IOError; return for clarity anyway
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        stuff = headers['www-authenticate']
        import re
        match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
        if not match:
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        scheme, realm = match.groups()
        if scheme.lower() != 'basic':
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        name = 'retry_' + self.type + '_basic_auth'
        if data is None:
            return getattr(self,name)(url, realm)
        else:
            return getattr(self,name)(url, realm, data)

    def http_error_407(self, url, fp, errcode, errmsg, headers, data=None):
        """Error 407 -- proxy authentication required.
        This function supports Basic authentication only."""
        if 'proxy-authenticate' not in headers:
            # http_error_default raises IOError; return for clarity anyway
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        stuff = headers['proxy-authenticate']
        import re
        match = re.match('[ \t]*([^ \t]+)[ \t]+realm="([^"]*)"', stuff)
        if not match:
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        scheme, realm = match.groups()
        if scheme.lower() != 'basic':
            return URLopener.http_error_default(self, url, fp,
                                                errcode, errmsg, headers)
        name = 'retry_proxy_' + self.type + '_basic_auth'
        if data is None:
            return getattr(self,name)(url, realm)
        else:
            return getattr(self,name)(url, realm, data)

    def retry_proxy_http_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        newurl = 'http://' + host + selector
        proxy = self.proxies['http']
        urltype, proxyhost = splittype(proxy)
        proxyhost, proxyselector = splithost(proxyhost)
        i = proxyhost.find('@') + 1
        proxyhost = proxyhost[i:]
        user, passwd = self.get_user_passwd(proxyhost, realm, i)
        if not (user or passwd): return None
        proxyhost = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + proxyhost
        self.proxies['http'] = 'http://' + proxyhost + proxyselector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_proxy_https_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        newurl = 'https://' + host + selector
        proxy = self.proxies['https']
        urltype, proxyhost = splittype(proxy)
        proxyhost, proxyselector = splithost(proxyhost)
        i = proxyhost.find('@') + 1
        proxyhost = proxyhost[i:]
        user, passwd = self.get_user_passwd(proxyhost, realm, i)
        if not (user or passwd): return None
        proxyhost = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + proxyhost
        self.proxies['https'] = 'https://' + proxyhost + proxyselector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_http_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        i = host.find('@') + 1
        host = host[i:]
        user, passwd = self.get_user_passwd(host, realm, i)
        if not (user or passwd): return None
        host = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + host
        newurl = 'http://' + host + selector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def retry_https_basic_auth(self, url, realm, data=None):
        host, selector = splithost(url)
        i = host.find('@') + 1
        host = host[i:]
        user, passwd = self.get_user_passwd(host, realm, i)
        if not (user or passwd): return None
        host = quote(user, safe='') + ':' + quote(passwd, safe='') + '@' + host
        newurl = 'https://' + host + selector
        if data is None:
            return self.open(newurl)
        else:
            return self.open(newurl, data)

    def get_user_passwd(self, host, realm, clear_cache=0):
        key = realm + '@' + host.lower()
        if key in self.auth_cache:
            if clear_cache:
                del self.auth_cache[key]
            else:
                return self.auth_cache[key]
        user, passwd = self.prompt_user_passwd(host, realm)
        if user or passwd: self.auth_cache[key] = (user, passwd)
        return user, passwd

    def prompt_user_passwd(self, host, realm):
        """Override this in a GUI environment!"""
        import getpass
        try:
            user = raw_input("Enter username for %s at %s: " % (realm,
                                                                host))
            passwd = getpass.getpass("Enter password for %s in %s at %s: " %
                (user, realm, host))
            return user, passwd
        except KeyboardInterrupt:
            print
            return None, None


# Utility functions

_localhost = None
def localhost():
    """Return the IP address of the magic hostname 'localhost'."""
    global _localhost
    if _localhost is None:
        _localhost = socket.gethostbyname('localhost')
    return _localhost

_thishost = None
def thishost():
    """Return the IP address of the current host."""
    global _thishost
    if _thishost is None:
        try:
            _thishost = socket.gethostbyname(socket.gethostname())
        except socket.gaierror:
            _thishost = socket.gethostbyname('localhost')
    return _thishost

_ftperrors = None
def ftperrors():
    """Return the set of errors raised by the FTP class."""
    global _ftperrors
    if _ftperrors is None:
        import ftplib
        _ftperrors = ftplib.all_errors
    return _ftperrors

_noheaders = None
def noheaders():
    """Return an empty mimetools.Message object."""
    global _noheaders
    if _noheaders is None:
        import mimetools
        try:
            from cStringIO import StringIO
        except ImportError:
            from StringIO import StringIO
        _noheaders = mimetools.Message(StringIO(), 0)
        _noheaders.fp.close()   # Recycle file descriptor
    return _noheaders


# Utility classes

class ftpwrapper:
    """Class used by open_ftp() for cache of open FTP connections."""

    def __init__(self, user, passwd, host, port, dirs,
                 timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                 persistent=True):
        self.user = user
        self.passwd = passwd
        self.host = host
        self.port = port
        self.dirs = dirs
        self.timeout = timeout
        self.refcount = 0
        self.keepalive = persistent
        try:
            self.init()
        except:
            self.close()
            raise

    def init(self):
        import ftplib
        self.busy = 0
        self.ftp = ftplib.FTP()
        self.ftp.connect(self.host, self.port, self.timeout)
        self.ftp.login(self.user, self.passwd)
        _target = '/'.join(self.dirs)
        self.ftp.cwd(_target)

    def retrfile(self, file, type):
        import ftplib
        self.endtransfer()
        if type in ('d', 'D'): cmd = 'TYPE A'; isdir = 1
        else: cmd = 'TYPE ' + type; isdir = 0
        try:
            self.ftp.voidcmd(cmd)
        except ftplib.all_errors:
            self.init()
            self.ftp.voidcmd(cmd)
        conn = None
        if file and not isdir:
            # Try to retrieve as a file
            try:
                cmd = 'RETR ' + file
                conn, retrlen = self.ftp.ntransfercmd(cmd)
            except ftplib.error_perm, reason:
                if str(reason)[:3] != '550':
                    raise IOError, ('ftp error', reason), sys.exc_info()[2]
        if not conn:
            # Set transfer mode to ASCII!
            self.ftp.voidcmd('TYPE A')
            # Try a directory listing. Verify that directory exists.
            if file:
                pwd = self.ftp.pwd()
                try:
                    try:
                        self.ftp.cwd(file)
                    except ftplib.error_perm, reason:
                        raise IOError, ('ftp error', reason), sys.exc_info()[2]
                finally:
                    self.ftp.cwd(pwd)
                cmd = 'LIST ' + file
            else:
                cmd = 'LIST'
            conn, retrlen = self.ftp.ntransfercmd(cmd)
        self.busy = 1
        ftpobj = addclosehook(conn.makefile('rb'), self.file_close)
        self.refcount += 1
        conn.close()
        # Pass back both a suitably decorated object and a retrieval length
        return (ftpobj, retrlen)

    def endtransfer(self):
        if not self.busy:
            return
        self.busy = 0
        try:
            self.ftp.voidresp()
        except ftperrors():
            pass

    def close(self):
        self.keepalive = False
        if self.refcount <= 0:
            self.real_close()

    def file_close(self):
        self.endtransfer()
        self.refcount -= 1
        if self.refcount <= 0 and not self.keepalive:
            self.real_close()

    def real_close(self):
        self.endtransfer()
        try:
            self.ftp.close()
        except ftperrors():
            pass

class addbase:
    """Base class for addinfo and addclosehook."""

    def __init__(self, fp):
        self.fp = fp
        self.read = self.fp.read
        self.readline = self.fp.readline
        if hasattr(self.fp, "readlines"): self.readlines = self.fp.readlines
        if hasattr(self.fp, "fileno"):
            self.fileno = self.fp.fileno
        else:
            self.fileno = lambda: None
        if hasattr(self.fp, "__iter__"):
            self.__iter__ = self.fp.__iter__
            if hasattr(self.fp, "next"):
                self.next = self.fp.next

    def __repr__(self):
        return '<%s at %r whose fp = %r>' % (self.__class__.__name__,
                                             id(self), self.fp)

    def close(self):
        self.read = None
        self.readline = None
        self.readlines = None
        self.fileno = None
        if self.fp: self.fp.close()
        self.fp = None

class addclosehook(addbase):
    """Class to add a close hook to an open file."""

    def __init__(self, fp, closehook, *hookargs):
        addbase.__init__(self, fp)
        self.closehook = closehook
        self.hookargs = hookargs

    def close(self):
        try:
            closehook = self.closehook
            hookargs = self.hookargs
            if closehook:
                self.closehook = None
                self.hookargs = None
                closehook(*hookargs)
        finally:
            addbase.close(self)
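The fire-at-most-once close-hook pattern used by addclosehook (which ftpwrapper relies on to decrement its refcount) can be sketched standalone; names here are illustrative, not this module's:

```python
import io

class CloseHook:
    """Minimal sketch of the addclosehook pattern: run a hook once on close."""
    def __init__(self, fp, hook, *args):
        self.fp, self.hook, self.args = fp, hook, args

    def read(self, *a):
        return self.fp.read(*a)

    def close(self):
        try:
            if self.hook:
                hook, args = self.hook, self.args
                self.hook = None  # clear first so the hook fires at most once
                hook(*args)
        finally:
            self.fp.close()

events = []
f = CloseHook(io.BytesIO(b'data'), events.append, 'closed')
print(f.read())  # b'data'
f.close()
f.close()        # second close: hook already cleared, only fp.close() runs
print(events)    # ['closed']
```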


class addinfo(addbase):
    """class to add an info() method to an open file."""

    def __init__(self, fp, headers):
        addbase.__init__(self, fp)
        self.headers = headers

    def info(self):
        return self.headers

class addinfourl(addbase):
    """class to add info() and geturl() methods to an open file."""

    def __init__(self, fp, headers, url, code=None):
        addbase.__init__(self, fp)
        self.headers = headers
        self.url = url
        self.code = code

    def info(self):
        return self.headers

    def getcode(self):
        return self.code

    def geturl(self):
        return self.url


# Utilities to parse URLs (most of these return None for missing parts):
# unwrap('<URL:type://host/path>') --> 'type://host/path'
# splittype('type:opaquestring') --> 'type', 'opaquestring'
# splithost('//host[:port]/path') --> 'host[:port]', '/path'
# splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'
# splitpasswd('user:passwd') -> 'user', 'passwd'
# splitport('host:port') --> 'host', 'port'
# splitquery('/path?query') --> '/path', 'query'
# splittag('/path#tag') --> '/path', 'tag'
# splitattr('/path;attr1=value1;attr2=value2;...') ->
#   '/path', ['attr1=value1', 'attr2=value2', ...]
# splitvalue('attr=value') --> 'attr', 'value'
# unquote('abc%20def') -> 'abc def'
# quote('abc def') -> 'abc%20def'
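The two most-used split helpers can be demonstrated in isolation; this sketch re-declares the same regexes used below rather than importing this Python 2 module:

```python
import re

# Same patterns as this module's _typeprog and _hostprog.
_typeprog = re.compile(r'^([^/:]+):')
_hostprog = re.compile(r'//([^/#?]*)(.*)', re.DOTALL)

def splittype(url):
    m = _typeprog.match(url)
    if m:
        scheme = m.group(1)
        return scheme.lower(), url[len(scheme) + 1:]
    return None, url

def splithost(url):
    m = _hostprog.match(url)
    if m:
        host_port, path = m.group(1), m.group(2)
        if path and not path.startswith('/'):
            path = '/' + path
        return host_port, path
    return None, url

print(splittype('http://example.com/a'))    # ('http', '//example.com/a')
print(splithost('//example.com:80/a?q=1'))  # ('example.com:80', '/a?q=1')
print(splittype('mailto:x@y'))              # ('mailto', 'x@y')
```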

try:
    unicode
except NameError:
    def _is_unicode(x):
        return 0
else:
    def _is_unicode(x):
        return isinstance(x, unicode)

def toBytes(url):
    """toBytes(u"URL") --> 'URL'."""
    # Most URL schemes require ASCII. If that changes, the conversion
    # can be relaxed
    if _is_unicode(url):
        try:
            url = url.encode("ASCII")
        except UnicodeError:
            raise UnicodeError("URL " + repr(url) +
                               " contains non-ASCII characters")
    return url

def unwrap(url):
    """unwrap('<URL:type://host/path>') --> 'type://host/path'."""
    url = url.strip()
    if url[:1] == '<' and url[-1:] == '>':
        url = url[1:-1].strip()
    if url[:4] == 'URL:': url = url[4:].strip()
    return url

_typeprog = None
def splittype(url):
    """splittype('type:opaquestring') --> 'type', 'opaquestring'."""
    global _typeprog
    if _typeprog is None:
        import re
        _typeprog = re.compile('^([^/:]+):')

    match = _typeprog.match(url)
    if match:
        scheme = match.group(1)
        return scheme.lower(), url[len(scheme) + 1:]
    return None, url

_hostprog = None
def splithost(url):
    """splithost('//host[:port]/path') --> 'host[:port]', '/path'."""
    global _hostprog
    if _hostprog is None:
        _hostprog = re.compile('//([^/#?]*)(.*)', re.DOTALL)

    match = _hostprog.match(url)
    if match:
        host_port = match.group(1)
        path = match.group(2)
        if path and not path.startswith('/'):
            path = '/' + path
        return host_port, path
    return None, url

_userprog = None
def splituser(host):
    """splituser('user[:passwd]@host[:port]') --> 'user[:passwd]', 'host[:port]'."""
    global _userprog
    if _userprog is None:
        import re
        _userprog = re.compile('^(.*)@(.*)$')

    match = _userprog.match(host)
    if match: return match.group(1, 2)
    return None, host

_passwdprog = None
def splitpasswd(user):
    """splitpasswd('user:passwd') -> 'user', 'passwd'."""
    global _passwdprog
    if _passwdprog is None:
        import re
        _passwdprog = re.compile('^([^:]*):(.*)$',re.S)

    match = _passwdprog.match(user)
    if match: return match.group(1, 2)
    return user, None

_portprog = None
def splitport(host):
    """splitport('host:port') --> 'host', 'port'."""
    global _portprog
    if _portprog is None:
        import re
        _portprog = re.compile('^(.*):([0-9]*)$')

    match = _portprog.match(host)
    if match:
        host, port = match.groups()
        if port:
            return host, port
    return host, None

_nportprog = None
def splitnport(host, defport=-1):
    """Split host and port, returning numeric port.
    Return given default port if no ':' found; defaults to -1.
    Return numerical port if a valid number is found after ':'.
    Return None if ':' but not a valid number."""
    global _nportprog
    if _nportprog is None:
        import re
        _nportprog = re.compile('^(.*):(.*)$')

    match = _nportprog.match(host)
    if match:
        host, port = match.group(1, 2)
        if port:
            try:
                nport = int(port)
            except ValueError:
                nport = None
            return host, nport
    return host, defport

_queryprog = None
def splitquery(url):
    """splitquery('/path?query') --> '/path', 'query'."""
    global _queryprog
    if _queryprog is None:
        import re
        _queryprog = re.compile(r'^(.*)\?([^?]*)$')

    match = _queryprog.match(url)
    if match: return match.group(1, 2)
    return url, None

_tagprog = None
def splittag(url):
    """splittag('/path#tag') --> '/path', 'tag'."""
    global _tagprog
    if _tagprog is None:
        import re
        _tagprog = re.compile('^(.*)#([^#]*)$')

    match = _tagprog.match(url)
    if match: return match.group(1, 2)
    return url, None

def splitattr(url):
    """splitattr('/path;attr1=value1;attr2=value2;...') ->
        '/path', ['attr1=value1', 'attr2=value2', ...]."""
    words = url.split(';')
    return words[0], words[1:]

_valueprog = None
def splitvalue(attr):
    """splitvalue('attr=value') --> 'attr', 'value'."""
    global _valueprog
    if _valueprog is None:
        import re
        _valueprog = re.compile('^([^=]*)=(.*)$')

    match = _valueprog.match(attr)
    if match: return match.group(1, 2)
    return attr, None

# urlparse contains a duplicate of this method to avoid a circular import.  If
# you update this method, also update the copy in urlparse.  This code
# duplication does not exist in Python3.

_hexdig = '0123456789ABCDEFabcdef'
_hextochr = dict((a + b, chr(int(a + b, 16)))
                 for a in _hexdig for b in _hexdig)
_asciire = re.compile('([\x00-\x7f]+)')

def unquote(s):
    """unquote('abc%20def') -> 'abc def'."""
    if _is_unicode(s):
        if '%' not in s:
            return s
        bits = _asciire.split(s)
        res = [bits[0]]
        append = res.append
        for i in range(1, len(bits), 2):
            append(unquote(str(bits[i])).decode('latin1'))
            append(bits[i + 1])
        return ''.join(res)

    bits = s.split('%')
    # fastpath
    if len(bits) == 1:
        return s
    res = [bits[0]]
    append = res.append
    for item in bits[1:]:
        try:
            append(_hextochr[item[:2]])
            append(item[2:])
        except KeyError:
            append('%')
            append(item)
    return ''.join(res)

def unquote_plus(s):
    """unquote('%7e/abc+def') -> '~/abc def'"""
    s = s.replace('+', ' ')
    return unquote(s)
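Percent-decoding behaves the same in Python 3's urllib.parse, including the table-driven behavior above of passing unrecognized escapes through unchanged; a quick sketch using those modern equivalents:

```python
from urllib.parse import unquote, unquote_plus

print(unquote('abc%20def'))         # abc def
print(unquote_plus('%7e/abc+def'))  # ~/abc def
# Invalid escapes survive intact, as in the _hextochr lookup above:
print(unquote('100%zz'))            # 100%zz
```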

always_safe = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
               'abcdefghijklmnopqrstuvwxyz'
               '0123456789' '_.-')
_safe_map = {}
for i, c in zip(xrange(256), str(bytearray(xrange(256)))):
    _safe_map[c] = c if (i < 128 and c in always_safe) else '%{:02X}'.format(i)
_safe_quoters = {}

def quote(s, safe='/'):
    """quote('abc def') -> 'abc%20def'

    Each part of a URL, e.g. the path info, the query, etc., has a
    different set of reserved characters that must be quoted.

    RFC 2396 Uniform Resource Identifiers (URI): Generic Syntax lists
    the following reserved characters.

    reserved    = ";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |
                  "$" | ","

    Each of these characters is reserved in some component of a URL,
    but not necessarily in all of them.

    By default, the quote function is intended for quoting the path
    section of a URL.  Thus, it will not encode '/'.  This character
    is reserved, but in typical usage the quote function is being
    called on a path where the existing slash characters are used as
    reserved characters.
    """
    # fastpath
    if not s:
        if s is None:
            raise TypeError('None object cannot be quoted')
        return s
    cachekey = (safe, always_safe)
    try:
        (quoter, safe) = _safe_quoters[cachekey]
    except KeyError:
        safe_map = _safe_map.copy()
        safe_map.update([(c, c) for c in safe])
        quoter = safe_map.__getitem__
        safe = always_safe + safe
        _safe_quoters[cachekey] = (quoter, safe)
    if not s.rstrip(safe):
        return s
    return ''.join(map(quoter, s))
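The `safe` parameter described in the docstring above is easiest to see by example; Python 3's `urllib.parse.quote` keeps the same default of leaving '/' unescaped:

```python
from urllib.parse import quote

print(quote('abc def'))          # abc%20def
print(quote('/a b/c'))           # /a%20b/c   ('/' is safe by default)
print(quote('/a b/c', safe=''))  # %2Fa%20b%2Fc
```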

def quote_plus(s, safe=''):
    """Quote the query fragment of a URL; replacing ' ' with '+'"""
    if ' ' in s:
        s = quote(s, safe + ' ')
        return s.replace(' ', '+')
    return quote(s, safe)

def urlencode(query, doseq=0):
    """Encode a sequence of two-element tuples or dictionary into a URL query string.

    If any values in the query arg are sequences and doseq is true, each
    sequence element is converted to a separate parameter.

    If the query arg is a sequence of two-element tuples, the order of the
    parameters in the output will match the order of parameters in the
    input.
    """

    if hasattr(query,"items"):
        # mapping objects
        query = query.items()
    else:
        # it's a bother at times that strings and string-like objects are
        # sequences...
        try:
            # non-sequence items should not work with len()
            # non-empty strings will fail this
            if len(query) and not isinstance(query[0], tuple):
                raise TypeError
            # zero-length sequences of all types will get here and succeed,
            # but that's a minor nit - since the original implementation
            # allowed empty dicts that type of behavior probably should be
            # preserved for consistency
        except TypeError:
            ty,va,tb = sys.exc_info()
            raise TypeError, "not a valid non-string sequence or mapping object", tb

    l = []
    if not doseq:
        # preserve old behavior
        for k, v in query:
            k = quote_plus(str(k))
            v = quote_plus(str(v))
            l.append(k + '=' + v)
    else:
        for k, v in query:
            k = quote_plus(str(k))
            if isinstance(v, str):
                v = quote_plus(v)
                l.append(k + '=' + v)
            elif _is_unicode(v):
                # is there a reasonable way to convert to ASCII?
                # encode generates a string, but "replace" or "ignore"
                # lose information and "strict" can raise UnicodeError
                v = quote_plus(v.encode("ASCII","replace"))
                l.append(k + '=' + v)
            else:
                try:
                    # is this a sufficient test for sequence-ness?
                    len(v)
                except TypeError:
                    # not a sequence
                    v = quote_plus(str(v))
                    l.append(k + '=' + v)
                else:
                    # loop over the sequence
                    for elt in v:
                        l.append(k + '=' + quote_plus(str(elt)))
    return '&'.join(l)
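The doseq branch above is the part that usually surprises callers: without it a list value is stringified wholesale, with it each element becomes its own parameter. Python 3's `urllib.parse.urlencode` keeps this contract:

```python
from urllib.parse import urlencode

print(urlencode({'q': 'a b'}))                 # q=a+b
print(urlencode([('k', [1, 2])], doseq=True))  # k=1&k=2
print(urlencode([('k', [1, 2])]))              # k=%5B1%2C+2%5D  (str([1, 2]) quoted)
```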

# Proxy handling
def getproxies_environment():
    """Return a dictionary of scheme -> proxy server URL mappings.

    Scan the environment for variables named <scheme>_proxy;
    this seems to be the standard convention.  To prefer lowercase
    variables, the environment is processed in two passes: the first
    matches names of any case, the second only lowercase names.

    If you need a different way, you can pass a proxies dictionary to the
    [Fancy]URLopener constructor.
    """
    # Get all variables
    proxies = {}
    for name, value in os.environ.items():
        name = name.lower()
        if value and name[-6:] == '_proxy':
            proxies[name[:-6]] = value

    # CVE-2016-1000110 - If we are running as CGI script, forget HTTP_PROXY
    # (non-all-lowercase) as it may be set from the web server by a "Proxy:"
    # header from the client
    # If "proxy" is lowercase, it will still be used thanks to the next block
    if 'REQUEST_METHOD' in os.environ:
        proxies.pop('http', None)

    # Get lowercase variables
    for name, value in os.environ.items():
        if name[-6:] == '_proxy':
            name = name.lower()
            if value:
                proxies[name[:-6]] = value
            else:
                proxies.pop(name[:-6], None)

    return proxies

def proxy_bypass_environment(host, proxies=None):
    """Test if proxies should not be used for a particular host.

    Checks the proxies dict for the value of no_proxy, which should be a
    list of comma separated DNS suffixes, or '*' for all hosts.
    """
    if proxies is None:
        proxies = getproxies_environment()
    # don't bypass, if no_proxy isn't specified
    try:
        no_proxy = proxies['no']
    except KeyError:
        return 0
    # '*' is special case for always bypass
    if no_proxy == '*':
        return 1
    # strip port off host
    hostonly, port = splitport(host)
    # check if the host ends with any of the DNS suffixes
    no_proxy_list = [proxy.strip() for proxy in no_proxy.split(',')]
    for name in no_proxy_list:
        if name:
            name = name.lstrip('.')  # ignore leading dots
            name = re.escape(name)
            pattern = r'(.+\.)?%s$' % name
            if (re.match(pattern, hostonly, re.I)
                    or re.match(pattern, host, re.I)):
                return 1
    # otherwise, don't bypass
    return 0
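The no_proxy suffix matching above can be condensed into a standalone sketch (hypothetical `bypass` helper with a crude port strip; the regex is the same `(.+\.)?name$` pattern, so an entry matches the host itself or any subdomain):

```python
import re

def bypass(host, no_proxy):
    """Mirror proxy_bypass_environment's suffix matching for one host."""
    if no_proxy == '*':               # special case: always bypass
        return True
    hostonly = host.rsplit(':', 1)[0]  # crude port strip for this sketch
    for name in (n.strip().lstrip('.') for n in no_proxy.split(',')):
        if name and re.match(r'(.+\.)?%s$' % re.escape(name), hostonly, re.I):
            return True
    return False

print(bypass('sub.example.com', 'example.com,localhost'))  # True
print(bypass('example.org', 'example.com'))                # False
print(bypass('anything', '*'))                             # True
```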


if sys.platform == 'darwin':
    from _scproxy import _get_proxy_settings, _get_proxies

    def proxy_bypass_macosx_sysconf(host):
        """
        Return True iff this host shouldn't be accessed using a proxy

        This function uses the MacOSX framework SystemConfiguration
        to fetch the proxy information.
        """
        import re
        import socket
        from fnmatch import fnmatch

        hostonly, port = splitport(host)

        def ip2num(ipAddr):
            parts = ipAddr.split('.')
            parts = map(int, parts)
            if len(parts) != 4:
                parts = (parts + [0, 0, 0, 0])[:4]
            return (parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]

        proxy_settings = _get_proxy_settings()

        # Check for simple host names:
        if '.' not in host:
            if proxy_settings['exclude_simple']:
                return True

        hostIP = None

        for value in proxy_settings.get('exceptions', ()):
            # Items in the list are strings like these: *.local, 169.254/16
            if not value: continue

            m = re.match(r"(\d+(?:\.\d+)*)(/\d+)?", value)
            if m is not None:
                if hostIP is None:
                    try:
                        hostIP = socket.gethostbyname(hostonly)
                        hostIP = ip2num(hostIP)
                    except socket.error:
                        continue

                base = ip2num(m.group(1))
                mask = m.group(2)
                if mask is None:
                    mask = 8 * (m.group(1).count('.') + 1)

                else:
                    mask = int(mask[1:])
                mask = 32 - mask

                if (hostIP >> mask) == (base >> mask):
                    return True

            elif fnmatch(host, value):
                return True

        return False

    def getproxies_macosx_sysconf():
        """Return a dictionary of scheme -> proxy server URL mappings.

        This function uses the MacOSX framework SystemConfiguration
        to fetch the proxy information.
        """
        return _get_proxies()

    def proxy_bypass(host):
        """Return True, if a host should be bypassed.

        Checks proxy settings gathered from the environment, if specified, or
        from the MacOSX framework SystemConfiguration.
        """
        proxies = getproxies_environment()
        if proxies:
            return proxy_bypass_environment(host, proxies)
        else:
            return proxy_bypass_macosx_sysconf(host)

    def getproxies():
        return getproxies_environment() or getproxies_macosx_sysconf()

elif os.name == 'nt':
    def getproxies_registry():
        """Return a dictionary of scheme -> proxy server URL mappings.

        Win32 uses the registry to store proxies.

        """
        proxies = {}
        try:
            import _winreg
        except ImportError:
            # Std module, so should be around - but you never know!
            return proxies
        try:
            internetSettings = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER,
                r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
            proxyEnable = _winreg.QueryValueEx(internetSettings,
                                               'ProxyEnable')[0]
            if proxyEnable:
                # Returned as Unicode but problems if not converted to ASCII
                proxyServer = str(_winreg.QueryValueEx(internetSettings,
                                                       'ProxyServer')[0])
                if '=' in proxyServer:
                    # Per-protocol settings
                    for p in proxyServer.split(';'):
                        protocol, address = p.split('=', 1)
                        # See if address has a type:// prefix
                        import re
                        if not re.match('^([^/:]+)://', address):
                            address = '%s://%s' % (protocol, address)
                        proxies[protocol] = address
                else:
                    # Use one setting for all protocols
                    if proxyServer[:5] == 'http:':
                        proxies['http'] = proxyServer
                    else:
                        proxies['http'] = 'http://%s' % proxyServer
                        proxies['https'] = 'https://%s' % proxyServer
                        proxies['ftp'] = 'ftp://%s' % proxyServer
            internetSettings.Close()
        except (WindowsError, ValueError, TypeError):
            # Either the registry key was not found, or the value was
            # in an unexpected format.
            # proxies already set up to be empty so nothing to do
            pass
        return proxies

    def getproxies():
        """Return a dictionary of scheme -> proxy server URL mappings.

        Returns settings gathered from the environment, if specified,
        or the registry.

        """
        return getproxies_environment() or getproxies_registry()

    def proxy_bypass_registry(host):
        try:
            import _winreg
            import re
        except ImportError:
            # Std modules, so should be around - but you never know!
            return 0
        try:
            internetSettings = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER,
                r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
            proxyEnable = _winreg.QueryValueEx(internetSettings,
                                               'ProxyEnable')[0]
            proxyOverride = str(_winreg.QueryValueEx(internetSettings,
                                                     'ProxyOverride')[0])
            # ^^^^ Returned as Unicode but problems if not converted to ASCII
        except WindowsError:
            return 0
        if not proxyEnable or not proxyOverride:
            return 0
        # try to make a host list from name and IP address.
        rawHost, port = splitport(host)
        host = [rawHost]
        try:
            addr = socket.gethostbyname(rawHost)
            if addr != rawHost:
                host.append(addr)
        except socket.error:
            pass
        try:
            fqdn = socket.getfqdn(rawHost)
            if fqdn != rawHost:
                host.append(fqdn)
        except socket.error:
            pass
        # make a check value list from the registry entry: replace the
        # '<local>' string by the localhost entry and the corresponding
        # canonical entry.
        proxyOverride = proxyOverride.split(';')
        # now check if we match one of the registry values.
        for test in proxyOverride:
            if test == '<local>':
                if '.' not in rawHost:
                    return 1
            test = test.replace(".", r"\.")     # mask dots
            test = test.replace("*", r".*")     # change glob sequence
            test = test.replace("?", r".")      # change glob char
            for val in host:
                # print "%s <--> %s" %( test, val )
                if re.match(test, val, re.I):
                    return 1
        return 0

    def proxy_bypass(host):
        """Return True, if the host should be bypassed.

        Checks proxy settings gathered from the environment, if specified,
        or the registry.
        """
        proxies = getproxies_environment()
        if proxies:
            return proxy_bypass_environment(host, proxies)
        else:
            return proxy_bypass_registry(host)

else:
    # By default use environment variables
    getproxies = getproxies_environment
    proxy_bypass = proxy_bypass_environment
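
# getproxies_environment (the default used above) works by scanning the
# process environment for variables named <scheme>_proxy.  The following is
# a minimal, hedged sketch of that convention in modern Python 3; the helper
# name scan_proxy_env is ours, not urllib's.

```python
def scan_proxy_env(environ):
    # Collect <scheme>_proxy variables, e.g. http_proxy, ftp_proxy.
    proxies = {}
    for name, value in environ.items():
        name = name.lower()
        if value and name.endswith('_proxy'):
            proxies[name[:-len('_proxy')]] = value
    return proxies
```

# E.g. scan_proxy_env({'http_proxy': 'http://proxy:3128'}) yields
# {'http': 'http://proxy:3128'}; uppercase HTTP_PROXY is honoured too.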

# Test and time quote() and unquote()
def test1():
    s = ''
    for i in range(256): s = s + chr(i)
    s = s*4
    t0 = time.time()
    qs = quote(s)
    uqs = unquote(qs)
    t1 = time.time()
    if uqs != s:
        print 'Wrong!'
    print repr(s)
    print repr(qs)
    print repr(uqs)
    print round(t1 - t0, 3), 'sec'
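
# The per-protocol branch of getproxies_registry above splits a ProxyServer
# registry value such as 'http=proxy:80;ftp=ftp.example:21' on ';' and '='.
# A standalone Python 3 sketch of just that parsing (registry access and the
# single-server fallback omitted):

```python
import re

def parse_proxy_server(value):
    # Mirror the '=' branch: per-protocol entries separated by ';'.
    proxies = {}
    for part in value.split(';'):
        protocol, address = part.split('=', 1)
        # Prepend a scheme when the address lacks a type:// prefix.
        if not re.match(r'^([^/:]+)://', address):
            address = '%s://%s' % (protocol, address)
        proxies[protocol] = address
    return proxies
```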


def reporthook(blocknum, blocksize, totalsize):
    # Report during remote transfers
    print "Block number: %d, Block size: %d, Total size: %d" % (
        blocknum, blocksize, totalsize)
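
# proxy_bypass_registry above turns each ProxyOverride entry into a regex by
# escaping dots and translating the '*' and '?' globs.  Sketched in isolation
# below (the host/IP/FQDN expansion is omitted); override_matches is an
# illustrative name, not part of urllib.

```python
import re

def override_matches(pattern, host):
    # Same rewriting as proxy_bypass_registry: mask dots, expand globs.
    pattern = pattern.replace('.', r'\.')
    pattern = pattern.replace('*', r'.*')
    pattern = pattern.replace('?', r'.')
    return re.match(pattern, host, re.I) is not None
```

# Note the replacement order matters: dots are escaped first, so the '.'
# characters introduced by the glob expansion are left alone.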
# This file is generated by mkstringprep.py. DO NOT EDIT.
"""Library that exposes various tables found in the StringPrep RFC 3454.

There are two kinds of tables: sets, for which a member test is provided,
and mappings, for which a mapping function is provided.
"""

from unicodedata import ucd_3_2_0 as unicodedata

assert unicodedata.unidata_version == '3.2.0'

def in_table_a1(code):
    if unicodedata.category(code) != 'Cn': return False
    c = ord(code)
    if 0xFDD0 <= c < 0xFDF0: return False
    return (c & 0xFFFF) not in (0xFFFE, 0xFFFF)


b1_set = set([173, 847, 6150, 6155, 6156, 6157, 8203, 8204, 8205, 8288, 65279] + range(65024,65040))
def in_table_b1(code):
    return ord(code) in b1_set
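
# Table B.1 lists characters that are "mapped to nothing": applying it means
# deleting them from the input.  A Python 3 sketch of that step over a small
# subset of the table (the full set is b1_set above):

```python
# Illustrative subset of table B.1: soft hyphen, zero-width space/joiners, BOM.
MAP_TO_NOTHING = {0x00AD, 0x200B, 0x200C, 0x200D, 0xFEFF}

def apply_b1(s):
    # Drop every character whose code point is in the table.
    return ''.join(ch for ch in s if ord(ch) not in MAP_TO_NOTHING)
```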


b3_exceptions = {
0xb5:u'\u03bc', 0xdf:u'ss', 0x130:u'i\u0307', 0x149:u'\u02bcn',
0x17f:u's', 0x1f0:u'j\u030c', 0x345:u'\u03b9', 0x37a:u' \u03b9',
0x390:u'\u03b9\u0308\u0301', 0x3b0:u'\u03c5\u0308\u0301', 0x3c2:u'\u03c3', 0x3d0:u'\u03b2',
0x3d1:u'\u03b8', 0x3d2:u'\u03c5', 0x3d3:u'\u03cd', 0x3d4:u'\u03cb',
0x3d5:u'\u03c6', 0x3d6:u'\u03c0', 0x3f0:u'\u03ba', 0x3f1:u'\u03c1',
0x3f2:u'\u03c3', 0x3f5:u'\u03b5', 0x587:u'\u0565\u0582', 0x1e96:u'h\u0331',
0x1e97:u't\u0308', 0x1e98:u'w\u030a', 0x1e99:u'y\u030a', 0x1e9a:u'a\u02be',
0x1e9b:u'\u1e61', 0x1f50:u'\u03c5\u0313', 0x1f52:u'\u03c5\u0313\u0300', 0x1f54:u'\u03c5\u0313\u0301',
0x1f56:u'\u03c5\u0313\u0342', 0x1f80:u'\u1f00\u03b9', 0x1f81:u'\u1f01\u03b9', 0x1f82:u'\u1f02\u03b9',
0x1f83:u'\u1f03\u03b9', 0x1f84:u'\u1f04\u03b9', 0x1f85:u'\u1f05\u03b9', 0x1f86:u'\u1f06\u03b9',
0x1f87:u'\u1f07\u03b9', 0x1f88:u'\u1f00\u03b9', 0x1f89:u'\u1f01\u03b9', 0x1f8a:u'\u1f02\u03b9',
0x1f8b:u'\u1f03\u03b9', 0x1f8c:u'\u1f04\u03b9', 0x1f8d:u'\u1f05\u03b9', 0x1f8e:u'\u1f06\u03b9',
0x1f8f:u'\u1f07\u03b9', 0x1f90:u'\u1f20\u03b9', 0x1f91:u'\u1f21\u03b9', 0x1f92:u'\u1f22\u03b9',
0x1f93:u'\u1f23\u03b9', 0x1f94:u'\u1f24\u03b9', 0x1f95:u'\u1f25\u03b9', 0x1f96:u'\u1f26\u03b9',
0x1f97:u'\u1f27\u03b9', 0x1f98:u'\u1f20\u03b9', 0x1f99:u'\u1f21\u03b9', 0x1f9a:u'\u1f22\u03b9',
0x1f9b:u'\u1f23\u03b9', 0x1f9c:u'\u1f24\u03b9', 0x1f9d:u'\u1f25\u03b9', 0x1f9e:u'\u1f26\u03b9',
0x1f9f:u'\u1f27\u03b9', 0x1fa0:u'\u1f60\u03b9', 0x1fa1:u'\u1f61\u03b9', 0x1fa2:u'\u1f62\u03b9',
0x1fa3:u'\u1f63\u03b9', 0x1fa4:u'\u1f64\u03b9', 0x1fa5:u'\u1f65\u03b9', 0x1fa6:u'\u1f66\u03b9',
0x1fa7:u'\u1f67\u03b9', 0x1fa8:u'\u1f60\u03b9', 0x1fa9:u'\u1f61\u03b9', 0x1faa:u'\u1f62\u03b9',
0x1fab:u'\u1f63\u03b9', 0x1fac:u'\u1f64\u03b9', 0x1fad:u'\u1f65\u03b9', 0x1fae:u'\u1f66\u03b9',
0x1faf:u'\u1f67\u03b9', 0x1fb2:u'\u1f70\u03b9', 0x1fb3:u'\u03b1\u03b9', 0x1fb4:u'\u03ac\u03b9',
0x1fb6:u'\u03b1\u0342', 0x1fb7:u'\u03b1\u0342\u03b9', 0x1fbc:u'\u03b1\u03b9', 0x1fbe:u'\u03b9',
0x1fc2:u'\u1f74\u03b9', 0x1fc3:u'\u03b7\u03b9', 0x1fc4:u'\u03ae\u03b9', 0x1fc6:u'\u03b7\u0342',
0x1fc7:u'\u03b7\u0342\u03b9', 0x1fcc:u'\u03b7\u03b9', 0x1fd2:u'\u03b9\u0308\u0300', 0x1fd3:u'\u03b9\u0308\u0301',
0x1fd6:u'\u03b9\u0342', 0x1fd7:u'\u03b9\u0308\u0342', 0x1fe2:u'\u03c5\u0308\u0300', 0x1fe3:u'\u03c5\u0308\u0301',
0x1fe4:u'\u03c1\u0313', 0x1fe6:u'\u03c5\u0342', 0x1fe7:u'\u03c5\u0308\u0342', 0x1ff2:u'\u1f7c\u03b9',
0x1ff3:u'\u03c9\u03b9', 0x1ff4:u'\u03ce\u03b9', 0x1ff6:u'\u03c9\u0342', 0x1ff7:u'\u03c9\u0342\u03b9',
0x1ffc:u'\u03c9\u03b9', 0x20a8:u'rs', 0x2102:u'c', 0x2103:u'\xb0c',
0x2107:u'\u025b', 0x2109:u'\xb0f', 0x210b:u'h', 0x210c:u'h',
0x210d:u'h', 0x2110:u'i', 0x2111:u'i', 0x2112:u'l',
0x2115:u'n', 0x2116:u'no', 0x2119:u'p', 0x211a:u'q',
0x211b:u'r', 0x211c:u'r', 0x211d:u'r', 0x2120:u'sm',
0x2121:u'tel', 0x2122:u'tm', 0x2124:u'z', 0x2128:u'z',
0x212c:u'b', 0x212d:u'c', 0x2130:u'e', 0x2131:u'f',
0x2133:u'm', 0x213e:u'\u03b3', 0x213f:u'\u03c0', 0x2145:u'd',
0x3371:u'hpa', 0x3373:u'au', 0x3375:u'ov', 0x3380:u'pa',
0x3381:u'na', 0x3382:u'\u03bca', 0x3383:u'ma', 0x3384:u'ka',
0x3385:u'kb', 0x3386:u'mb', 0x3387:u'gb', 0x338a:u'pf',
0x338b:u'nf', 0x338c:u'\u03bcf', 0x3390:u'hz', 0x3391:u'khz',
0x3392:u'mhz', 0x3393:u'ghz', 0x3394:u'thz', 0x33a9:u'pa',
0x33aa:u'kpa', 0x33ab:u'mpa', 0x33ac:u'gpa', 0x33b4:u'pv',
0x33b5:u'nv', 0x33b6:u'\u03bcv', 0x33b7:u'mv', 0x33b8:u'kv',
0x33b9:u'mv', 0x33ba:u'pw', 0x33bb:u'nw', 0x33bc:u'\u03bcw',
0x33bd:u'mw', 0x33be:u'kw', 0x33bf:u'mw', 0x33c0:u'k\u03c9',
0x33c1:u'm\u03c9', 0x33c3:u'bq', 0x33c6:u'c\u2215kg', 0x33c7:u'co.',
0x33c8:u'db', 0x33c9:u'gy', 0x33cb:u'hp', 0x33cd:u'kk',
0x33ce:u'km', 0x33d7:u'ph', 0x33d9:u'ppm', 0x33da:u'pr',
0x33dc:u'sv', 0x33dd:u'wb', 0xfb00:u'ff', 0xfb01:u'fi',
0xfb02:u'fl', 0xfb03:u'ffi', 0xfb04:u'ffl', 0xfb05:u'st',
0xfb06:u'st', 0xfb13:u'\u0574\u0576', 0xfb14:u'\u0574\u0565', 0xfb15:u'\u0574\u056b',
0xfb16:u'\u057e\u0576', 0xfb17:u'\u0574\u056d', 0x1d400:u'a', 0x1d401:u'b',
0x1d402:u'c', 0x1d403:u'd', 0x1d404:u'e', 0x1d405:u'f',
0x1d406:u'g', 0x1d407:u'h', 0x1d408:u'i', 0x1d409:u'j',
0x1d40a:u'k', 0x1d40b:u'l', 0x1d40c:u'm', 0x1d40d:u'n',
0x1d40e:u'o', 0x1d40f:u'p', 0x1d410:u'q', 0x1d411:u'r',
0x1d412:u's', 0x1d413:u't', 0x1d414:u'u', 0x1d415:u'v',
0x1d416:u'w', 0x1d417:u'x', 0x1d418:u'y', 0x1d419:u'z',
0x1d434:u'a', 0x1d435:u'b', 0x1d436:u'c', 0x1d437:u'd',
0x1d438:u'e', 0x1d439:u'f', 0x1d43a:u'g', 0x1d43b:u'h',
0x1d43c:u'i', 0x1d43d:u'j', 0x1d43e:u'k', 0x1d43f:u'l',
0x1d440:u'm', 0x1d441:u'n', 0x1d442:u'o', 0x1d443:u'p',
0x1d444:u'q', 0x1d445:u'r', 0x1d446:u's', 0x1d447:u't',
0x1d448:u'u', 0x1d449:u'v', 0x1d44a:u'w', 0x1d44b:u'x',
0x1d44c:u'y', 0x1d44d:u'z', 0x1d468:u'a', 0x1d469:u'b',
0x1d46a:u'c', 0x1d46b:u'd', 0x1d46c:u'e', 0x1d46d:u'f',
0x1d46e:u'g', 0x1d46f:u'h', 0x1d470:u'i', 0x1d471:u'j',
0x1d472:u'k', 0x1d473:u'l', 0x1d474:u'm', 0x1d475:u'n',
0x1d476:u'o', 0x1d477:u'p', 0x1d478:u'q', 0x1d479:u'r',
0x1d47a:u's', 0x1d47b:u't', 0x1d47c:u'u', 0x1d47d:u'v',
0x1d47e:u'w', 0x1d47f:u'x', 0x1d480:u'y', 0x1d481:u'z',
0x1d49c:u'a', 0x1d49e:u'c', 0x1d49f:u'd', 0x1d4a2:u'g',
0x1d4a5:u'j', 0x1d4a6:u'k', 0x1d4a9:u'n', 0x1d4aa:u'o',
0x1d4ab:u'p', 0x1d4ac:u'q', 0x1d4ae:u's', 0x1d4af:u't',
0x1d4b0:u'u', 0x1d4b1:u'v', 0x1d4b2:u'w', 0x1d4b3:u'x',
0x1d4b4:u'y', 0x1d4b5:u'z', 0x1d4d0:u'a', 0x1d4d1:u'b',
0x1d4d2:u'c', 0x1d4d3:u'd', 0x1d4d4:u'e', 0x1d4d5:u'f',
0x1d4d6:u'g', 0x1d4d7:u'h', 0x1d4d8:u'i', 0x1d4d9:u'j',
0x1d4da:u'k', 0x1d4db:u'l', 0x1d4dc:u'm', 0x1d4dd:u'n',
0x1d4de:u'o', 0x1d4df:u'p', 0x1d4e0:u'q', 0x1d4e1:u'r',
0x1d4e2:u's', 0x1d4e3:u't', 0x1d4e4:u'u', 0x1d4e5:u'v',
0x1d4e6:u'w', 0x1d4e7:u'x', 0x1d4e8:u'y', 0x1d4e9:u'z',
0x1d504:u'a', 0x1d505:u'b', 0x1d507:u'd', 0x1d508:u'e',
0x1d509:u'f', 0x1d50a:u'g', 0x1d50d:u'j', 0x1d50e:u'k',
0x1d50f:u'l', 0x1d510:u'm', 0x1d511:u'n', 0x1d512:u'o',
0x1d513:u'p', 0x1d514:u'q', 0x1d516:u's', 0x1d517:u't',
0x1d518:u'u', 0x1d519:u'v', 0x1d51a:u'w', 0x1d51b:u'x',
0x1d51c:u'y', 0x1d538:u'a', 0x1d539:u'b', 0x1d53b:u'd',
0x1d53c:u'e', 0x1d53d:u'f', 0x1d53e:u'g', 0x1d540:u'i',
0x1d541:u'j', 0x1d542:u'k', 0x1d543:u'l', 0x1d544:u'm',
0x1d546:u'o', 0x1d54a:u's', 0x1d54b:u't', 0x1d54c:u'u',
0x1d54d:u'v', 0x1d54e:u'w', 0x1d54f:u'x', 0x1d550:u'y',
0x1d56c:u'a', 0x1d56d:u'b', 0x1d56e:u'c', 0x1d56f:u'd',
0x1d570:u'e', 0x1d571:u'f', 0x1d572:u'g', 0x1d573:u'h',
0x1d574:u'i', 0x1d575:u'j', 0x1d576:u'k', 0x1d577:u'l',
0x1d578:u'm', 0x1d579:u'n', 0x1d57a:u'o', 0x1d57b:u'p',
0x1d57c:u'q', 0x1d57d:u'r', 0x1d57e:u's', 0x1d57f:u't',
0x1d580:u'u', 0x1d581:u'v', 0x1d582:u'w', 0x1d583:u'x',
0x1d584:u'y', 0x1d585:u'z', 0x1d5a0:u'a', 0x1d5a1:u'b',
0x1d5a2:u'c', 0x1d5a3:u'd', 0x1d5a4:u'e', 0x1d5a5:u'f',
0x1d5a6:u'g', 0x1d5a7:u'h', 0x1d5a8:u'i', 0x1d5a9:u'j',
0x1d5aa:u'k', 0x1d5ab:u'l', 0x1d5ac:u'm', 0x1d5ad:u'n',
0x1d5ae:u'o', 0x1d5af:u'p', 0x1d5b0:u'q', 0x1d5b1:u'r',
0x1d5b2:u's', 0x1d5b3:u't', 0x1d5b4:u'u', 0x1d5b5:u'v',
0x1d5b6:u'w', 0x1d5b7:u'x', 0x1d5b8:u'y', 0x1d5b9:u'z',
0x1d5d4:u'a', 0x1d5d5:u'b', 0x1d5d6:u'c', 0x1d5d7:u'd',
0x1d5d8:u'e', 0x1d5d9:u'f', 0x1d5da:u'g', 0x1d5db:u'h',
0x1d5dc:u'i', 0x1d5dd:u'j', 0x1d5de:u'k', 0x1d5df:u'l',
0x1d5e0:u'm', 0x1d5e1:u'n', 0x1d5e2:u'o', 0x1d5e3:u'p',
0x1d5e4:u'q', 0x1d5e5:u'r', 0x1d5e6:u's', 0x1d5e7:u't',
0x1d5e8:u'u', 0x1d5e9:u'v', 0x1d5ea:u'w', 0x1d5eb:u'x',
0x1d5ec:u'y', 0x1d5ed:u'z', 0x1d608:u'a', 0x1d609:u'b',
0x1d60a:u'c', 0x1d60b:u'd', 0x1d60c:u'e', 0x1d60d:u'f',
0x1d60e:u'g', 0x1d60f:u'h', 0x1d610:u'i', 0x1d611:u'j',
0x1d612:u'k', 0x1d613:u'l', 0x1d614:u'm', 0x1d615:u'n',
0x1d616:u'o', 0x1d617:u'p', 0x1d618:u'q', 0x1d619:u'r',
0x1d61a:u's', 0x1d61b:u't', 0x1d61c:u'u', 0x1d61d:u'v',
0x1d61e:u'w', 0x1d61f:u'x', 0x1d620:u'y', 0x1d621:u'z',
0x1d63c:u'a', 0x1d63d:u'b', 0x1d63e:u'c', 0x1d63f:u'd',
0x1d640:u'e', 0x1d641:u'f', 0x1d642:u'g', 0x1d643:u'h',
0x1d644:u'i', 0x1d645:u'j', 0x1d646:u'k', 0x1d647:u'l',
0x1d648:u'm', 0x1d649:u'n', 0x1d64a:u'o', 0x1d64b:u'p',
0x1d64c:u'q', 0x1d64d:u'r', 0x1d64e:u's', 0x1d64f:u't',
0x1d650:u'u', 0x1d651:u'v', 0x1d652:u'w', 0x1d653:u'x',
0x1d654:u'y', 0x1d655:u'z', 0x1d670:u'a', 0x1d671:u'b',
0x1d672:u'c', 0x1d673:u'd', 0x1d674:u'e', 0x1d675:u'f',
0x1d676:u'g', 0x1d677:u'h', 0x1d678:u'i', 0x1d679:u'j',
0x1d67a:u'k', 0x1d67b:u'l', 0x1d67c:u'm', 0x1d67d:u'n',
0x1d67e:u'o', 0x1d67f:u'p', 0x1d680:u'q', 0x1d681:u'r',
0x1d682:u's', 0x1d683:u't', 0x1d684:u'u', 0x1d685:u'v',
0x1d686:u'w', 0x1d687:u'x', 0x1d688:u'y', 0x1d689:u'z',
0x1d6a8:u'\u03b1', 0x1d6a9:u'\u03b2', 0x1d6aa:u'\u03b3', 0x1d6ab:u'\u03b4',
0x1d6ac:u'\u03b5', 0x1d6ad:u'\u03b6', 0x1d6ae:u'\u03b7', 0x1d6af:u'\u03b8',
0x1d6b0:u'\u03b9', 0x1d6b1:u'\u03ba', 0x1d6b2:u'\u03bb', 0x1d6b3:u'\u03bc',
0x1d6b4:u'\u03bd', 0x1d6b5:u'\u03be', 0x1d6b6:u'\u03bf', 0x1d6b7:u'\u03c0',
0x1d6b8:u'\u03c1', 0x1d6b9:u'\u03b8', 0x1d6ba:u'\u03c3', 0x1d6bb:u'\u03c4',
0x1d6bc:u'\u03c5', 0x1d6bd:u'\u03c6', 0x1d6be:u'\u03c7', 0x1d6bf:u'\u03c8',
0x1d6c0:u'\u03c9', 0x1d6d3:u'\u03c3', 0x1d6e2:u'\u03b1', 0x1d6e3:u'\u03b2',
0x1d6e4:u'\u03b3', 0x1d6e5:u'\u03b4', 0x1d6e6:u'\u03b5', 0x1d6e7:u'\u03b6',
0x1d6e8:u'\u03b7', 0x1d6e9:u'\u03b8', 0x1d6ea:u'\u03b9', 0x1d6eb:u'\u03ba',
0x1d6ec:u'\u03bb', 0x1d6ed:u'\u03bc', 0x1d6ee:u'\u03bd', 0x1d6ef:u'\u03be',
0x1d6f0:u'\u03bf', 0x1d6f1:u'\u03c0', 0x1d6f2:u'\u03c1', 0x1d6f3:u'\u03b8',
0x1d6f4:u'\u03c3', 0x1d6f5:u'\u03c4', 0x1d6f6:u'\u03c5', 0x1d6f7:u'\u03c6',
0x1d6f8:u'\u03c7', 0x1d6f9:u'\u03c8', 0x1d6fa:u'\u03c9', 0x1d70d:u'\u03c3',
0x1d71c:u'\u03b1', 0x1d71d:u'\u03b2', 0x1d71e:u'\u03b3', 0x1d71f:u'\u03b4',
0x1d720:u'\u03b5', 0x1d721:u'\u03b6', 0x1d722:u'\u03b7', 0x1d723:u'\u03b8',
0x1d724:u'\u03b9', 0x1d725:u'\u03ba', 0x1d726:u'\u03bb', 0x1d727:u'\u03bc',
0x1d728:u'\u03bd', 0x1d729:u'\u03be', 0x1d72a:u'\u03bf', 0x1d72b:u'\u03c0',
0x1d72c:u'\u03c1', 0x1d72d:u'\u03b8', 0x1d72e:u'\u03c3', 0x1d72f:u'\u03c4',
0x1d730:u'\u03c5', 0x1d731:u'\u03c6', 0x1d732:u'\u03c7', 0x1d733:u'\u03c8',
0x1d734:u'\u03c9', 0x1d747:u'\u03c3', 0x1d756:u'\u03b1', 0x1d757:u'\u03b2',
0x1d758:u'\u03b3', 0x1d759:u'\u03b4', 0x1d75a:u'\u03b5', 0x1d75b:u'\u03b6',
0x1d75c:u'\u03b7', 0x1d75d:u'\u03b8', 0x1d75e:u'\u03b9', 0x1d75f:u'\u03ba',
0x1d760:u'\u03bb', 0x1d761:u'\u03bc', 0x1d762:u'\u03bd', 0x1d763:u'\u03be',
0x1d764:u'\u03bf', 0x1d765:u'\u03c0', 0x1d766:u'\u03c1', 0x1d767:u'\u03b8',
0x1d768:u'\u03c3', 0x1d769:u'\u03c4', 0x1d76a:u'\u03c5', 0x1d76b:u'\u03c6',
0x1d76c:u'\u03c7', 0x1d76d:u'\u03c8', 0x1d76e:u'\u03c9', 0x1d781:u'\u03c3',
0x1d790:u'\u03b1', 0x1d791:u'\u03b2', 0x1d792:u'\u03b3', 0x1d793:u'\u03b4',
0x1d794:u'\u03b5', 0x1d795:u'\u03b6', 0x1d796:u'\u03b7', 0x1d797:u'\u03b8',
0x1d798:u'\u03b9', 0x1d799:u'\u03ba', 0x1d79a:u'\u03bb', 0x1d79b:u'\u03bc',
0x1d79c:u'\u03bd', 0x1d79d:u'\u03be', 0x1d79e:u'\u03bf', 0x1d79f:u'\u03c0',
0x1d7a0:u'\u03c1', 0x1d7a1:u'\u03b8', 0x1d7a2:u'\u03c3', 0x1d7a3:u'\u03c4',
0x1d7a4:u'\u03c5', 0x1d7a5:u'\u03c6', 0x1d7a6:u'\u03c7', 0x1d7a7:u'\u03c8',
0x1d7a8:u'\u03c9', 0x1d7bb:u'\u03c3', }

def map_table_b3(code):
    r = b3_exceptions.get(ord(code))
    if r is not None: return r
    return code.lower()
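
# map_table_b3 is case folding: the exception table covers characters whose
# fold is not simply lower(), e.g. U+00DF LATIN SMALL LETTER SHARP S folds to
# 'ss'.  A tiny Python 3 equivalent over two entries taken from b3_exceptions:

```python
# Two entries copied from b3_exceptions above; the real table is far larger.
FOLD_EXCEPTIONS = {0x00DF: 'ss', 0x1E9B: '\u1e61'}

def fold_char(ch):
    # Try the exceptional folds first, fall back to plain lowercasing.
    r = FOLD_EXCEPTIONS.get(ord(ch))
    return r if r is not None else ch.lower()
```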


def map_table_b2(a):
    al = map_table_b3(a)
    b = unicodedata.normalize("NFKC", al)
    bl = u"".join([map_table_b3(ch) for ch in b])
    c = unicodedata.normalize("NFKC", bl)
    if b != c:
        return c
    else:
        return al
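
# map_table_b2 folds, normalizes to NFKC, folds again, and renormalizes; the
# second pass catches characters whose NFKC expansion introduces new
# uppercase letters.  The same dance sketched in Python 3 with str.lower()
# standing in for map_table_b3 (an approximation - the real fold also applies
# the exception table) and the current UCD rather than 3.2.0:

```python
import unicodedata as _ud

def fold_nfkc(ch):
    # Fold, normalize, refold, renormalize - as in map_table_b2.
    al = ch.lower()
    b = _ud.normalize('NFKC', al)
    bl = ''.join(c.lower() for c in b)
    c = _ud.normalize('NFKC', bl)
    return c if b != c else al
```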


def in_table_c11(code):
    return code == u" "


def in_table_c12(code):
    return unicodedata.category(code) == "Zs" and code != u" "

def in_table_c11_c12(code):
    return unicodedata.category(code) == "Zs"


def in_table_c21(code):
    return ord(code) < 128 and unicodedata.category(code) == "Cc"

c22_specials = set([1757, 1807, 6158, 8204, 8205, 8232, 8233, 65279] + range(8288,8292) + range(8298,8304) + range(65529,65533) + range(119155,119163))
def in_table_c22(code):
    c = ord(code)
    if c < 128: return False
    if unicodedata.category(code) == "Cc": return True
    return c in c22_specials

def in_table_c21_c22(code):
    return unicodedata.category(code) == "Cc" or \
           ord(code) in c22_specials


def in_table_c3(code):
    return unicodedata.category(code) == "Co"


def in_table_c4(code):
    c = ord(code)
    if c < 0xFDD0: return False
    if c < 0xFDF0: return True
    return (ord(code) & 0xFFFF) in (0xFFFE, 0xFFFF)
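
# in_table_c4 above encodes the Unicode noncharacter ranges: U+FDD0..U+FDEF
# plus the last two code points of every plane.  Restated directly as a
# single Python 3 expression:

```python
def is_noncharacter(ch):
    cp = ord(ch)
    # U+FDD0..U+FDEF, or U+xxFFFE / U+xxFFFF in any plane.
    return 0xFDD0 <= cp < 0xFDF0 or (cp & 0xFFFF) in (0xFFFE, 0xFFFF)
```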


def in_table_c5(code):
    return unicodedata.category(code) == "Cs"


c6_set = set(range(65529,65534))
def in_table_c6(code):
    return ord(code) in c6_set


c7_set = set(range(12272,12284))
def in_table_c7(code):
    return ord(code) in c7_set


c8_set = set([832, 833, 8206, 8207] + range(8234,8239) + range(8298,8304))
def in_table_c8(code):
    return ord(code) in c8_set


c9_set = set([917505] + range(917536,917632))
def in_table_c9(code):
    return ord(code) in c9_set


def in_table_d1(code):
    return unicodedata.bidirectional(code) in ("R","AL")


def in_table_d2(code):
    return unicodedata.bidirectional(code) == "L"
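
# Tables D.1 and D.2 feed the bidi rule of RFC 3454 section 6: a string
# containing any RandALCat character (D.1) must contain no LCat character
# (D.2), and must both start and end with a RandALCat character.  A hedged
# Python 3 sketch of that check, using the current UCD rather than 3.2.0:

```python
import unicodedata as _ud

def bidi_ok(s):
    # RFC 3454 section 6, restated over unicodedata.bidirectional().
    randal = [_ud.bidirectional(ch) in ('R', 'AL') for ch in s]
    if not any(randal):
        return True                      # no RandALCat: nothing to enforce
    if any(_ud.bidirectional(ch) == 'L' for ch in s):
        return False                     # mixing RandALCat and LCat is banned
    return randal[0] and randal[-1]      # must start and end with RandALCat
```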
d||f�|d	k	rj|j
d|j��|d	k	rj|jd�}	tt|	�|�d}|	| j�}	d�|	D�}	|j
ddj|	��qjn|}|j
t||��|S(
sFormat the exception part of a traceback.

    The arguments are the exception type and value such as given by
    sys.last_type and sys.last_value. The return value is a list of
    strings, each ending in a newline.

    Normally, the list contains a single string; however, for
    SyntaxError exceptions, it contains several lines that (when
    printed) display detailed information about where the syntax
    error occurred.

    The message indicating which exception occurred is always the last
    string in the list.

    s<string>s  File "%s", line %d
s    %s
s
icss'|]}|j�r|pdVqdS(t N(tisspace(t.0tc((s!/usr/lib64/python2.7/traceback.pys	<genexpr>�ss    %s^
RN(t
isinstancet
BaseExceptionttypestInstanceTypeRttypeRt_format_final_exc_linet__name__t
issubclasstSyntaxErrortargst	ExceptionRRtrstriptmintlentlstriptjoin(
R1R2tstypeR3tmsgRRtoffsettbadlinet
caretspace((s!/usr/lib64/python2.7/traceback.pyR�s2	
 cCs@t|�}|dks|r,d|}nd||f}|S(sGReturn a list of a single line -- normal case for format_exception_onlys%s
s%s: %s
N(t	_some_strR(R1R2tvaluestrR((s!/usr/lib64/python2.7/traceback.pyR=�s

cCsgyt|�SWntk
r!nXy t|�}|jdd�SWntk
rUnXdt|�jS(Ntasciitbackslashreplaces<unprintable %s object>(RRBtunicodetencodeR<R>(R2((s!/usr/lib64/python2.7/traceback.pyRM�s

cCs]|dkrtj}nz/tj�\}}}t|||||�Wdd}}}XdS(s�Shorthand for 'print_exception(sys.exc_type, sys.exc_value, sys.exc_traceback, limit, file)'.
    (In fact, it uses sys.exc_info() to retrieve the same information
    in a thread-safe way.)N(RRRtexc_infoR	(R-RR1R2R,((s!/usr/lib64/python2.7/traceback.pyR�scCsKz5tj�\}}}djt||||��SWdd}}}XdS(s%Like print_exc() but return a string.RN(RRSRGRR(R-R1R2R,((s!/usr/lib64/python2.7/traceback.pyR�s cCsYttd�std��n|dkr6tj}nttjtjtj||�dS(snThis is a shorthand for 'print_exception(sys.last_type,
    sys.last_value, sys.last_traceback, limit, file)'.t	last_typesno last exceptionN(	R"Rt
ValueErrorRRR	RTt
last_valuetlast_traceback(R-R((s!/usr/lib64/python2.7/traceback.pyR
�scCs]|dkrCy
t�WqCtk
r?tj�djj}qCXntt||�|�dS(s�Print a stack trace from its invocation point.

    The optional 'f' argument can be used to specify an alternate
    stack frame at which to start. The optional 'limit' and 'file'
    arguments have the same meaning as for print_exception().
    iN(RtZeroDivisionErrorRRSR#tf_backRR(R/R-R((s!/usr/lib64/python2.7/traceback.pyRs

cCsV|dkrCy
t�WqCtk
r?tj�djj}qCXntt||��S(s5Shorthand for 'format_list(extract_stack(f, limit))'.iN(RRXRRSR#RYRR(R/R-((s!/usr/lib64/python2.7/traceback.pyRs

c	CsB|dkrCy
t�WqCtk
r?tj�djj}qCXn|dkrmttd�rmtj}qmng}d}x�|dk	r3|dks�||kr3|j}|j	}|j
}|j}tj
|�tj|||j�}|r�|j�}nd}|j||||f�|j}|d}q|W|j�|S(ssExtract the raw traceback from the current stack frame.

    The return value has the same format as for extract_tb().  The
    optional 'f' and 'limit' arguments have the same meaning as for
    print_stack().  Each item in the list is a quadruple (filename,
    line number, function name, text), and the entries are in order
    from oldest to newest stack frame.
    iR!iiN(RRXRRSR#RYR"R!tf_linenoR$R%R&R'R(R)R*RRtreverse(	R/R-RR.RR0RRR((s!/usr/lib64/python2.7/traceback.pyRs2	

'				
	
cCs|jS(sRCalculate correct line number of traceback given in tb.

    Obsolete in 2.3.
    (R
(R,((s!/usr/lib64/python2.7/traceback.pyR
;s(t__doc__R'RR:t__all__RRRRRRRR	RRR=RMRRR
RRRR
(((s!/usr/lib64/python2.7/traceback.pyt<module>s2			 	8			

		"�
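The compiled traceback module dumped above exposes helpers such as format_exc() and extract_tb(). A minimal sketch of how they fit together, written for a current Python 3 interpreter (the dump itself is Python 2.7 bytecode):

```python
import sys
import traceback

def failing():
    return 1 / 0

try:
    failing()
except ZeroDivisionError:
    # format_exc() returns the same text print_exc() would write to stderr.
    text = traceback.format_exc()
    # extract_tb() yields (filename, lineno, funcname, source_line) entries.
    entries = traceback.extract_tb(sys.exc_info()[2])

print(text.splitlines()[-1])  # -> ZeroDivisionError: division by zero
```

The last line of the formatted traceback is the exception type and message, exactly as the format_exception_only docstring describes.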
[Marshalled bytecode (.pyc) of /usr/lib64/python2.7/asyncore.py; the binary
payload is not recoverable as text. The module docstring survives intact:]

Basic infrastructure for asynchronous socket service clients and servers.

There are only two ways to have a program on a single processor do "more
than one thing at a time".  Multi-threaded programming is the simplest and
most popular way to do it, but there is another very different technique,
that lets you have nearly all the advantages of multi-threading, without
actually using multiple threads.  It's really only practical if your program
is largely I/O bound.  If your program is CPU bound, then pre-emptive
scheduled threads are probably what you really need.  Network servers are
rarely CPU-bound, however.

If your operating system supports the select() system call in its I/O
library (and nearly all do), then you can use it to juggle multiple
communication channels at once; doing other work while your I/O is taking
place in the "background."  Although this strategy can seem strange and
complex, especially at first, it is in many ways easier to understand and
control than multi-threaded programming.  The module documented here solves
many of the difficult problems for you, making the task of building
sophisticated high-performance network servers and clients a snap.

Names visible in the dump: ExitNow, read, write, poll, poll2, loop,
dispatcher, dispatcher_with_send, compact_traceback, close_all, and (on
POSIX platforms) file_wrapper and file_dispatcher.
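The asyncore docstring above is built around the select() system call. A minimal, self-contained sketch of that multiplexing idea, using a plain socketpair instead of asyncore's dispatcher classes (asyncore itself was removed from the stdlib in Python 3.12, so this stands in for it):

```python
import select
import socket

# A connected pair of sockets stands in for a real network channel.
a, b = socket.socketpair()
b.sendall(b"ping")

# select() blocks until at least one of the watched channels is ready;
# here 'a' is immediately readable because 'b' has already written to it.
readable, writable, exceptional = select.select([a], [a], [], 1.0)

msg = a.recv(4) if a in readable else b""
a.close()
b.close()
print(msg)  # -> b'ping'
```

This is the whole trick the module automates: one loop, one select() call, and a dispatch to whichever channels are ready, with no threads involved.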
[Marshalled bytecode (.pyc) of /usr/lib64/python2.7/ntpath.py; the binary
payload is not recoverable as text. Readable strings embedded in the dump:]

Module docstring: "Common pathname manipulations, WindowsNT/95 version.
Instead of importing this module directly, import os and refer to this
module as os.path."

Public names (__all__): normcase, isabs, join, splitdrive, split, splitext,
basename, dirname, commonprefix, getsize, getmtime, getatime, getctime,
islink, exists, lexists, isdir, isfile, ismount, walk, expanduser,
expandvars, normpath, abspath, splitunc, curdir, pardir, sep, pathsep,
defpath, altsep, extsep, devnull, realpath, supports_unicode_filenames,
relpath

Function docstrings visible in the dump:
  normcase   -- lowercase the pathname and turn slashes into backslashes
  splitdrive -- split a pathname into drive/UNC sharepoint and relative path;
                splitdrive("c:/dir") returns ("c:", "/dir"), while
                splitdrive("//host/computer/dir") returns
                ("//host/computer", "/dir")
  splitunc   -- split a pathname into UNC mount point and relative path
  split      -- return (head, tail) where tail is everything after the
                final slash
  islink     -- test for a symbolic link; always false on WindowsNT/95
                and OS/2
  ismount    -- test whether a path is a mount point (root of a drive)
  walk       -- directory tree walk with a callback function (removed in
                3.x in favor of os.walk)
  expanduser -- expand ~ and ~user constructs; if the user or $HOME is
                unknown, do nothing
  expandvars -- expand shell variables of the forms $var, ${var} and %var%;
                unknown variables are left unchanged
  normpath   -- normalize path, eliminating double slashes, etc.
  abspath    -- return the absolute version of a path
  relpath    -- return a relative version of a path
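The ntpath module dumped above is importable by that name on any platform, which makes its drive and UNC handling easy to check. A small sketch exercising splitdrive(), join() and normpath() exactly as its docstrings describe:

```python
import ntpath

# A drive-letter path splits at the colon...
drive, rest = ntpath.splitdrive("c:/dir")
# ...while a UNC path splits after //host/share.
unc, unc_rest = ntpath.splitdrive("//host/computer/dir")

# join() inserts backslashes; normpath() collapses "b/.." style segments.
joined = ntpath.join("C:\\tmp", "sub", "file.txt")
norm = ntpath.normpath("C:/a/b/../c")
print(drive, unc, norm)
```

Running this on a POSIX box gives the Windows-flavored results, which is the whole point of shipping ntpath separately from posixpath behind the os.path alias.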
"""An object-oriented interface to .netrc files."""

# Module and documentation by Eric S. Raymond, 21 Dec 1998

import os, stat, shlex
if os.name == 'posix':
    import pwd

__all__ = ["netrc", "NetrcParseError"]


class NetrcParseError(Exception):
    """Exception raised on syntax errors in the .netrc file."""
    def __init__(self, msg, filename=None, lineno=None):
        self.filename = filename
        self.lineno = lineno
        self.msg = msg
        Exception.__init__(self, msg)

    def __str__(self):
        return "%s (%s, line %s)" % (self.msg, self.filename, self.lineno)


class netrc:
    def __init__(self, file=None):
        default_netrc = file is None
        if file is None:
            try:
                file = os.path.join(os.environ['HOME'], ".netrc")
            except KeyError:
                raise IOError("Could not find .netrc: $HOME is not set")
        self.hosts = {}
        self.macros = {}
        with open(file) as fp:
            self._parse(file, fp, default_netrc)

    def _parse(self, file, fp, default_netrc):
        lexer = shlex.shlex(fp)
        lexer.wordchars += r"""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""
        lexer.commenters = lexer.commenters.replace('#', '')
        while 1:
            # Look for a machine, default, or macdef top-level keyword
            toplevel = tt = lexer.get_token()
            if not tt:
                break
            elif tt[0] == '#':
                # seek to beginning of comment, in case reading the token put
                # us on a new line, and then skip the rest of the line.
                pos = len(tt) + 1
                lexer.instream.seek(-pos, 1)
                lexer.instream.readline()
                continue
            elif tt == 'machine':
                entryname = lexer.get_token()
            elif tt == 'default':
                entryname = 'default'
            elif tt == 'macdef':                # Just skip to end of macdefs
                entryname = lexer.get_token()
                self.macros[entryname] = []
                lexer.whitespace = ' \t'
                while 1:
                    line = lexer.instream.readline()
                    if not line or line == '\012':
                        lexer.whitespace = ' \t\r\n'
                        break
                    self.macros[entryname].append(line)
                continue
            else:
                raise NetrcParseError(
                    "bad toplevel token %r" % tt, file, lexer.lineno)

            # We're looking at start of an entry for a named machine or default.
            login = ''
            account = password = None
            self.hosts[entryname] = {}
            while 1:
                tt = lexer.get_token()
                if (tt.startswith('#') or
                    tt in {'', 'machine', 'default', 'macdef'}):
                    if password:
                        self.hosts[entryname] = (login, account, password)
                        lexer.push_token(tt)
                        break
                    else:
                        raise NetrcParseError(
                            "malformed %s entry %s terminated by %s"
                            % (toplevel, entryname, repr(tt)),
                            file, lexer.lineno)
                elif tt == 'login' or tt == 'user':
                    login = lexer.get_token()
                elif tt == 'account':
                    account = lexer.get_token()
                elif tt == 'password':
                    if os.name == 'posix' and default_netrc:
                        prop = os.fstat(fp.fileno())
                        if prop.st_uid != os.getuid():
                            try:
                                fowner = pwd.getpwuid(prop.st_uid)[0]
                            except KeyError:
                                fowner = 'uid %s' % prop.st_uid
                            try:
                                user = pwd.getpwuid(os.getuid())[0]
                            except KeyError:
                                user = 'uid %s' % os.getuid()
                            raise NetrcParseError(
                                ("~/.netrc file owner (%s) does not match"
                                 " current user (%s)") % (fowner, user),
                                file, lexer.lineno)
                        if (prop.st_mode & (stat.S_IRWXG | stat.S_IRWXO)):
                            raise NetrcParseError(
                               "~/.netrc access too permissive: access"
                               " permissions must restrict access to only"
                               " the owner", file, lexer.lineno)
                    password = lexer.get_token()
                else:
                    raise NetrcParseError("bad follower token %r" % tt,
                                          file, lexer.lineno)

    def authenticators(self, host):
        """Return a (user, account, password) tuple for given host."""
        if host in self.hosts:
            return self.hosts[host]
        elif 'default' in self.hosts:
            return self.hosts['default']
        else:
            return None

    def __repr__(self):
        """Dump the class data in the format of a .netrc file."""
        rep = ""
        for host in self.hosts.keys():
            attrs = self.hosts[host]
            rep += "machine {host}\n\tlogin {attrs[0]}\n".format(host=host, attrs=attrs)
            if attrs[1]:
                rep += "\taccount {attrs[1]}\n".format(attrs=attrs)
            rep += "\tpassword {attrs[2]}\n".format(attrs=attrs)
        for macro in self.macros.keys():
            rep += "macdef {macro}\n".format(macro=macro)
            for line in self.macros[macro]:
                rep += line
            rep += "\n"
        return rep

if __name__ == '__main__':
    print netrc()
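The parser above can be exercised without touching a real ~/.netrc by handing it an explicit filename; in the source shown, the ownership and permission checks only run for the default file (default_netrc). A minimal sketch for a Python 3 interpreter, where the netrc module behaves like this Python 2 source:

```python
import os
import tempfile
from netrc import netrc

# Write a throwaway netrc-format file with one machine entry.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("machine example.com login alice password s3cret\n")

try:
    # authenticators() returns a (login, account, password) triple.
    auth = netrc(path).authenticators("example.com")
finally:
    os.unlink(path)

login, account, password = auth
print(login, password)  # -> alice s3cret
```

A "default" entry, if present, would be returned for any host that has no machine entry of its own, matching the fallback in authenticators() above.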
[Marshalled bytecode (.pyc) of /usr/lib64/python2.7/aifc.py; the binary
header is not recoverable as text. Readable strings in it include
__all__ = ["Error", "open", "openfp"] and a command-line demo under
if __name__ == '__main__'. The module docstring, recoverable from the
dump, follows.]

Stuff to parse AIFF-C and AIFF files.

Unless explicitly stated otherwise, the description below is true
both for AIFF-C files and AIFF files.

An AIFF-C file has the following structure.

  +-----------------+
  | FORM            |
  +-----------------+
  | <size>          |
  +----+------------+
  |    | AIFC       |
  |    +------------+
  |    | <chunks>   |
  |    |    .       |
  |    |    .       |
  |    |    .       |
  +----+------------+

An AIFF file has the string "AIFF" instead of "AIFC".

A chunk consists of an identifier (4 bytes) followed by a size (4 bytes,
big endian order), followed by the data.  The size field does not include
the size of the 8 byte header.

The following chunk types are recognized.

  FVER
      <version number of AIFF-C defining document> (AIFF-C only).
  MARK
      <# of markers> (2 bytes)
      list of markers:
          <marker ID> (2 bytes, must be > 0)
          <position> (4 bytes)
          <marker name> ("pstring")
  COMM
      <# of channels> (2 bytes)
      <# of sound frames> (4 bytes)
      <size of the samples> (2 bytes)
      <sampling frequency> (10 bytes, IEEE 80-bit extended
          floating point)
      in AIFF-C files only:
      <compression type> (4 bytes)
      <human-readable version of compression type> ("pstring")
  SSND
      <offset> (4 bytes, not used by this program)
      <blocksize> (4 bytes, not used by this program)
      <sound data>

A pstring consists of 1 byte length, a string of characters, and 0 or 1
byte pad to make the total length even.
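The chunk layout described above (a 4-byte identifier followed by a 4-byte big-endian size that excludes the 8-byte header) can be illustrated with struct, which is also what this module's internal helpers use. A small sketch, independent of the aifc module itself:

```python
import struct

# Build a COMM chunk header: 4-byte identifier + 4-byte big-endian size.
body = b"\x00" * 18                       # an 18-byte chunk body
header = b"COMM" + struct.pack(">L", len(body))

# Parsing it back recovers the identifier and the payload size; note the
# size field does not count the 8 header bytes themselves.
ident = header[:4]
size = struct.unpack(">L", header[4:8])[0]
print(ident, size)  # -> b'COMM' 18
```

Every chunk in an AIFF/AIFF-C file follows this same pattern, so a reader can skip unknown chunk types by seeking forward `size` (rounded up to an even count) bytes.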

Usage.

Reading AIFF files:
  f = aifc.open(file, 'r')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods read(), seek(), and close().
In some types of audio files, if the setpos() method is not used,
the seek() method is not necessary.

This returns an instance of a class with the following public methods:
  getnchannels()  -- returns number of audio channels (1 for
             mono, 2 for stereo)
  getsampwidth()  -- returns sample width in bytes
  getframerate()  -- returns sampling frequency
  getnframes()    -- returns number of audio frames
  getcomptype()   -- returns compression type ('NONE' for AIFF files)
  getcompname()   -- returns human-readable version of
             compression type ('not compressed' for AIFF files)
  getparams() -- returns a tuple consisting of all of the
             above in the above order
  getmarkers()    -- get the list of marks in the audio file or None
             if there are no marks
  getmark(id) -- get mark with the specified id (raises an error
             if the mark does not exist)
  readframes(n)   -- returns at most n frames of audio
  rewind()    -- rewind to the beginning of the audio stream
  setpos(pos) -- seek to the specified position
  tell()      -- return the current position
  close()     -- close the instance (make it unusable)
The position returned by tell(), the position given to setpos() and
the position of marks are all compatible and have nothing to do with
the actual position in the file.
The close() method is called automatically when the class instance
is destroyed.

Writing AIFF files:
  f = aifc.open(file, 'w')
where file is either the name of a file or an open file pointer.
The open file pointer must have methods write(), tell(), seek(), and
close().

This returns an instance of a class with the following public methods:
  aiff()      -- create an AIFF file (AIFF-C default)
  aifc()      -- create an AIFF-C file
  setnchannels(n) -- set the number of channels
  setsampwidth(n) -- set the sample width
  setframerate(n) -- set the frame rate
  setnframes(n)   -- set the number of frames
  setcomptype(type, name)
          -- set the compression type and the
             human-readable compression type
  setparams(tuple)
          -- set all parameters at once
  setmark(id, pos, name)
          -- add specified mark to the list of marks
  tell()      -- return current position in output file (useful
             in combination with setmark())
  writeframesraw(data)
          -- write audio frames without patching up the
             file header
  writeframes(data)
          -- write audio frames and patch up the file header
  close()     -- patch up the file header and close the
             output file
You should set the parameters before the first writeframesraw or
writeframes.  The total number of frames does not need to be set,
but when it is set to the correct value, the header does not have to
be patched up.
It is best to first set all parameters (the compression type
defaults to 'NONE'), and then write audio frames using writeframesraw.
When all frames have been written, either call writeframes('') or
close() to patch up the sizes in the header.
Marks can be added anytime.  If there are any marks, you must call
close() after all frames have been written.
The close() method is called automatically when the class instance
is destroyed.

When a file is opened with the extension '.aiff', an AIFF file is
written, otherwise an AIFF-C file is written.  This default can be
changed by calling aiff() or aifc() before the first writeframes or
writeframesraw.
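The reading and writing interfaces described above can be exercised end-to-end. A minimal sketch, assuming the aifc module is importable and using 'demo.aiff' as an arbitrary example filename: it writes a short mono file and reads it back.

```python
import aifc
import os

# Write a short mono, 16-bit, 8000 Hz AIFF file (100 frames).
w = aifc.open('demo.aiff', 'w')    # '.aiff' extension -> plain AIFF
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(8000)
w.writeframes(b'\x00\x01' * 100)   # 100 frames x 2 bytes; patches the header
w.close()

# Read the file back via the public methods listed above.
r = aifc.open('demo.aiff', 'r')
params = r.getparams()             # (nchannels, sampwidth, framerate,
                                   #  nframes, comptype, compname)
frames = r.readframes(r.getnframes())
r.close()
os.remove('demo.aiff')
```

Because the total number of frames was not set in advance, writeframes() patches the header with the correct sizes before close().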
i����NtErrortopentopenfpcBseZRS((t__name__t
__module__(((s/usr/lib64/python2.7/aifc.pyR�sl@QEcCsBy!tjd|jd��dSWntjk
r=t�nXdS(Ns>lii(tstructtunpacktreadterrortEOFError(tfile((s/usr/lib64/python2.7/aifc.pyt
_read_long�s!cCsBy!tjd|jd��dSWntjk
r=t�nXdS(Ns>Lii(RRRRR	(R
((s/usr/lib64/python2.7/aifc.pyt_read_ulong�s!cCsBy!tjd|jd��dSWntjk
r=t�nXdS(Ns>hii(RRRRR	(R
((s/usr/lib64/python2.7/aifc.pyt_read_short�s!cCsBy!tjd|jd��dSWntjk
r=t�nXdS(Ns>Hii(RRRRR	(R
((s/usr/lib64/python2.7/aifc.pyt_read_ushort�s!cCs_t|jd��}|dkr*d}n|j|�}|d@dkr[|jd�}n|S(Niit(tordR(R
tlengthtdatatdummy((s/usr/lib64/python2.7/aifc.pyt_read_string�s	g�����cCs�t|�}d}|dkr1d}|d}nt|�}t|�}||kok|kokdknryd}n>|dkr�t}n)|d}|d|td	|d
�}||S(Niii����i�gi�i�?lg@i?(R
Rt	_HUGE_VALtpow(tftexpontsignthimanttlomant((s/usr/lib64/python2.7/aifc.pyt_read_float�s
'		
cCs|jtjd|��dS(Ns>h(twriteRtpack(Rtx((s/usr/lib64/python2.7/aifc.pyt_write_short�scCs|jtjd|��dS(Ns>H(RRR(RR((s/usr/lib64/python2.7/aifc.pyt
_write_ushort�scCs|jtjd|��dS(Ns>l(RRR(RR((s/usr/lib64/python2.7/aifc.pyt_write_long�scCs|jtjd|��dS(Ns>L(RRR(RR((s/usr/lib64/python2.7/aifc.pyt_write_ulong�scCs}t|�dkr!td��n|jtjdt|���|j|�t|�d@dkry|jtd��ndS(Ni�s%string exceeds maximum pstring lengthtBii(tlent
ValueErrorRRRtchr(Rts((s/usr/lib64/python2.7/aifc.pyt
_write_string�s
c	Cshddl}|dkr+d}|d}nd}|dkrRd}d}d}n�|j|�\}}|dks�|dks�||kr�|dB}d}d}n�|d}|dkr�|j||�}d}n||B}|j|d�}|j|�}t|�}|j||d�}|j|�}t|�}t||�t||�t||�dS(	Ni����ii�i@ii�i�?i (tmathtfrexptldexptfloortlongR!R#(	RRR*RRRRtfmanttfsmant((s/usr/lib64/python2.7/aifc.pyt_write_float�s8
	$
	
	


(tChunkt	Aifc_readcBs�eZdZd�Zd�Zd�Zd�Zd�Zd�Z	d�Z
d�Zd�Zd	�Z
d
�Zd�Zd�Zd
�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�ZRS(cCs^d|_d|_d|_g|_d|_||_t|�}|j�dkr`t	d�n|j
d�}|dkr�d|_n!|dkr�d|_n	t	d�d|_d|_
x�d|_yt|j�}Wntk
r�PnX|j�}|d	kr|j|�d|_nj|d
krO||_
|j
d�}d|_n:|dkrmt|�|_n|d
kr�|j|�n|j�q�W|js�|j
r�t	d�n|jrZ|jrZddl}|jd|j|jd|j|jg}|jdkr|j|d<n(|jdkr>|j|d<n	t	d�|jj|�ndS(NitFORMs file does not start with FORM iditAIFFtAIFCisnot an AIFF or AIFF-C filetCOMMtSSNDitFVERtMARKs$COMM chunk and/or SSND chunk missingi����is$cannot compress more than 2 channels(t_versiontNonet_decompt_convertt_markerst	_soundpost_fileR2tgetnameRRt_aifct_comm_chunk_readt_ssnd_chunkt_ssnd_seek_neededR	t_read_comm_chunkRt	_readmarktskiptcltORIGINAL_FORMATtBITS_PER_COMPONENTt
_sampwidtht
FRAME_RATEt
_frameratet
_nchannelstMONOtSTEREO_INTERLEAVEDt	SetParams(tselfR
tchunktformdatat	chunknameRRJtparams((s/usr/lib64/python2.7/aifc.pytinitfp%sb										

			cCs]t|t�rLtj|d�}y|j|�WqY|j��qYXn
|j|�dS(Ntrb(t
isinstancet
basestringt__builtin__RRYtclose(RTR((s/usr/lib64/python2.7/aifc.pyt__init__Zs

cCs|jS(N(RA(RT((s/usr/lib64/python2.7/aifc.pytgetfpiscCsd|_d|_dS(Nii(RFR@(RT((s/usr/lib64/python2.7/aifc.pytrewindls	cCs>|j}z |r(d|_|j�nWd|jj�XdS(N(R=R<tCloseDecompressorRAR^(RTtdecomp((s/usr/lib64/python2.7/aifc.pyR^ps		cCs|jS(N(R@(RT((s/usr/lib64/python2.7/aifc.pyttellyscCs|jS(N(RP(RT((s/usr/lib64/python2.7/aifc.pytgetnchannels|scCs|jS(N(t_nframes(RT((s/usr/lib64/python2.7/aifc.pyt
getnframesscCs|jS(N(RM(RT((s/usr/lib64/python2.7/aifc.pytgetsampwidth�scCs|jS(N(RO(RT((s/usr/lib64/python2.7/aifc.pytgetframerate�scCs|jS(N(t	_comptype(RT((s/usr/lib64/python2.7/aifc.pytgetcomptype�scCs|jS(N(t	_compname(RT((s/usr/lib64/python2.7/aifc.pytgetcompname�scCs:|j�|j�|j�|j�|j�|j�fS(N(ReRhRiRgRkRm(RT((s/usr/lib64/python2.7/aifc.pyt	getparams�scCs t|j�dkrdS|jS(Ni(R%R?R<(RT((s/usr/lib64/python2.7/aifc.pyt
getmarkers�scCs<x%|jD]}||dkr
|Sq
Wtd|f�dS(Nismarker %r does not exist(R?R(RTtidtmarker((s/usr/lib64/python2.7/aifc.pytgetmark�scCs=|dks||jkr'td�n||_d|_dS(Nisposition not in rangei(RfRR@RF(RTtpos((s/usr/lib64/python2.7/aifc.pytsetpos�s	cCs�|jrd|jjd�|jjd�}|j|j}|rX|jj|d�nd|_n|dkrtdS|jj||j�}|jr�|r�|j|�}n|jt|�|j|j	|_|S(NiiR(
RFREtseekRR@t
_framesizeR>R%RPRM(RTtnframesRRsR((s/usr/lib64/python2.7/aifc.pyt
readframes�s	$cCsNddl}|jj|jt|�d�}|jjt|�|j|�S(Ni����i(RJR=tSetParamtFRAME_BUFFER_SIZER%t
DecompressRP(RTRRJR((s/usr/lib64/python2.7/aifc.pyt_decomp_data�s
cCsddl}|j|d�S(Ni����i(taudiooptulaw2lin(RTRR}((s/usr/lib64/python2.7/aifc.pyt	_ulaw2lin�scCsLddl}t|d�s'd|_n|j|d|j�\}|_|S(Ni����t_adpcmstatei(R}thasattrR<R�t	adpcm2lin(RTRR}((s/usr/lib64/python2.7/aifc.pyt
_adpcm2lin�scCspt|�|_t|�|_t|�dd|_tt|��|_|j|j|_|j	rZd}|j
dkr�d}dGHd|_
n|jd�|_|rt
|jjd��}|d@dkr�|d}n|j
||_
|jjd	d�nt|�|_|jd
krl|jdkrryd	dl}Wntk
rUqrX|j|_d|_dSnyd	dl}Wnitk
r�|jdkr�y)d	dl}|j|_d|_dSWq�tk
r�q�Xntd�nX|jdkr	|j}n$|jdkr$|j}n	td�|j|�|_|j|_d|_qlnd
|_d|_dS(NiiiiisWarning: bad COMM chunk sizeiii����tNONEtG722itULAWtulaws#cannot read compressed AIFF-C filestALAWtalawsunsupported compression typesnot compressed(R�R�(R�R�(R�R�(R
RPRRfRMtintRRORvRCt	chunksizeRRjRR
RuRRlR}tImportErrorR�R>RJRRt	G711_ULAWt	G711_ALAWtOpenDecompressorR=R|(RTRUtkludgeRR}RJtscheme((s/usr/lib64/python2.7/aifc.pyRG�sd	

	
	

		cCs�t|�}ygx`t|�D]R}t|�}t|�}t|�}|sR|r|jj|||f�qqWWnKtk
r�dGt|j�Gt|j�dkr�dGndGdG|GHnXdS(Ns!Warning: MARK chunk contains onlyiRqtmarkerss
instead of(R
trangeRRR?tappendR	R%(RTRUtnmarkerstiRpRstname((s/usr/lib64/python2.7/aifc.pyRHs$

N(RRR<RARYR_R`RaR^RdReRgRhRiRkRmRnRoRrRtRxR|RR�RGRH(((s/usr/lib64/python2.7/aifc.pyR3�s.$	5																						<t
Aifc_writecBs@eZd"Zd�Zd�Zd�Zd�Zd�Zd�Z	d�Z
d�Zd�Zd	�Z
d
�Zd�Zd�Zd
�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Z d�Z!d�Z"d�Z#d �Z$d!�Z%RS(#cCsft|t�r*|}tj|d�}nd}|j|�|ddkrYd|_n	d|_dS(Ntwbs???i����s.aiffii(R[R\R]RRYRC(RTRtfilename((s/usr/lib64/python2.7/aifc.pyR_@s
cCs�||_t|_d|_d|_d|_d|_d|_d|_	d|_
d|_d|_d|_
d|_g|_d|_d|_dS(NR�snot compressedii(RAt
_AIFC_versionR;RjRlR<t_compR>RPRMRORft_nframeswrittent_datawrittent_datalengthR?t_marklengthRC(RTR
((s/usr/lib64/python2.7/aifc.pyRYMs 															cCs|jr|j�ndS(N(RAR^(RT((s/usr/lib64/python2.7/aifc.pyt__del___s	cCs"|jrtd�nd|_dS(Ns0cannot change parameters after starting to writei(R�RRC(RT((s/usr/lib64/python2.7/aifc.pytaifffs	cCs"|jrtd�nd|_dS(Ns0cannot change parameters after starting to writei(R�RRC(RT((s/usr/lib64/python2.7/aifc.pytaifcks	cCs:|jrtd�n|dkr-td�n||_dS(Ns0cannot change parameters after starting to writeisbad # of channels(R�RRP(RTt	nchannels((s/usr/lib64/python2.7/aifc.pytsetnchannelsps
	cCs|jstd�n|jS(Nsnumber of channels not set(RPR(RT((s/usr/lib64/python2.7/aifc.pyRews	cCsF|jrtd�n|dks-|dkr9td�n||_dS(Ns0cannot change parameters after starting to writeiisbad sample width(R�RRM(RTt	sampwidth((s/usr/lib64/python2.7/aifc.pytsetsampwidth|s
	cCs|jstd�n|jS(Nssample width not set(RMR(RT((s/usr/lib64/python2.7/aifc.pyRh�s	cCs:|jrtd�n|dkr-td�n||_dS(Ns0cannot change parameters after starting to writeisbad frame rate(R�RRO(RTt	framerate((s/usr/lib64/python2.7/aifc.pytsetframerate�s
	cCs|jstd�n|jS(Nsframe rate not set(ROR(RT((s/usr/lib64/python2.7/aifc.pyRi�s	cCs"|jrtd�n||_dS(Ns0cannot change parameters after starting to write(R�RRf(RTRw((s/usr/lib64/python2.7/aifc.pyt
setnframes�s	cCs|jS(N(R�(RT((s/usr/lib64/python2.7/aifc.pyRg�scCsC|jrtd�n|d	kr-td�n||_||_dS(
Ns0cannot change parameters after starting to writeR�R�R�R�R�R�sunsupported compression type(R�R�R�R�R�R�(R�RRjRl(RTtcomptypetcompname((s/usr/lib64/python2.7/aifc.pytsetcomptype�s		cCs|jS(N(Rj(RT((s/usr/lib64/python2.7/aifc.pyRk�scCs|jS(N(Rl(RT((s/usr/lib64/python2.7/aifc.pyRm�scCs�|\}}}}}}|jr-td�n|d	krEtd�n|j|�|j|�|j|�|j|�|j||�dS(
Ns0cannot change parameters after starting to writeR�R�R�R�R�R�sunsupported compression type(R�R�R�R�R�R�(R�RR�R�R�R�R�(RTtinfoR�R�R�RwR�R�((s/usr/lib64/python2.7/aifc.pyt	setparams�s	



cCsR|js|js|jr*td�n|j|j|j|j|j|jfS(Nsnot all parameters set(RPRMRORRfRjRl(RT((s/usr/lib64/python2.7/aifc.pyRn�scCs�|dkrtd�n|dkr0td�nt|�td�krTtd�nxNtt|j��D]7}||j|dkrj|||f|j|<dSqjW|jj|||f�dS(Nismarker ID must be > 0smarker position must be >= 0Rsmarker name must be a string(RttypeR�R%R?R�(RTRpRsR�R�((s/usr/lib64/python2.7/aifc.pytsetmark�scCs<x%|jD]}||dkr
|Sq
Wtd|f�dS(Nismarker %r does not exist(R?R(RTRpRq((s/usr/lib64/python2.7/aifc.pyRr�scCs t|j�dkrdS|jS(Ni(R%R?R<(RT((s/usr/lib64/python2.7/aifc.pyRo�scCs|jS(N(R�(RT((s/usr/lib64/python2.7/aifc.pyRd�scCs�|jt|��t|�|j|j}|jrH|j|�}n|jj|�|j||_|jt|�|_dS(N(	t_ensure_header_writtenR%RMRPR>RARR�R�(RTRRw((s/usr/lib64/python2.7/aifc.pytwriteframesraw�s	cCsB|j|�|j|jks1|j|jkr>|j�ndS(N(R�R�RfR�R�t_patchheader(RTR((s/usr/lib64/python2.7/aifc.pytwriteframes�s
cCs�|jdkrdSz�|jd�|jd@rY|jjtd��|jd|_n|j�|j|jks�|j	|jks�|j
r�|j�n|jr�|jj
�d|_nWdd|_|j}d|_|j�XdS(Nii(RAR<R�R�RR't
_writemarkersR�RfR�R�R�R�tCloseCompressorR>R^(RTR((s/usr/lib64/python2.7/aifc.pyR^�s&


	
	
			cCs^ddl}|jj|jt|��}|jj|jt|��}|jj|j|�S(Ni����(RJR�RyRzR%tCOMPRESSED_BUFFER_SIZEtCompressRf(RTRRJR((s/usr/lib64/python2.7/aifc.pyt
_comp_datascCsddl}|j|d�S(Ni����i(R}tlin2ulaw(RTRR}((s/usr/lib64/python2.7/aifc.pyt	_lin2ulaw
scCsLddl}t|d�s'd|_n|j|d|j�\}|_|S(Ni����R�i(R}R�R<R�t	lin2adpcm(RTRR}((s/usr/lib64/python2.7/aifc.pyt
_lin2adpcmscCs�|js�|jdkrK|js-d|_n|jdkrKtd�qKn|jdkr�|jsod|_n|jdkr�td�q�n|js�td	�n|js�td
�n|js�td�n|j|�ndS(
NR�R�R�R�is9sample width must be 2 when compressing with ULAW or ALAWR�s:sample width must be 2 when compressing with G7.22 (ADPCM)s# channels not specifiedssample width not specifiedssampling rate not specified(R�R�R�R�(R�RjRMRRPROt
_write_header(RTtdatasize((s/usr/lib64/python2.7/aifc.pyR�s$						c
Cs�|jdkr|j|_dSyddl}Wn`tk
r�|jdkr�y ddl}|j|_dSWq�tk
r�q�Xntd�nX|jdkr�|j}n$|jdkr�|j	}n	td�|j
|�|_|jd	|j
|jd
|j|j|jd|jdg
}|jdkr?|j|d<n(|jd
kr^|j|d<n	td�|jj|�|jjd	d�}|j|_dS(NR�i����R�R�s$cannot write compressed AIFF-C filesR�R�sunsupported compression typeiiidiis$cannot compress more than 2 channelsR(R�R�(R�R�(R�R�(RjR�R>RJR�R}R�RR�R�tOpenCompressorR�RKRLRMRNRORzR�RPRQRRRSR�R�(RTRJR}R�RXR((s/usr/lib64/python2.7/aifc.pyt_init_compression-sB


				cCs'|jr%|jdkr%|j�n|jjd�|jsX||j|j|_n|j|j|j|_|jd@r�|jd|_n|jr&|jdkr�|jd|_|jd@r#|jd|_q#q&|jd	kr&|jd
d|_|jd@r#|jd|_q#q&ny|jj	�|_
Wn ttfk
r^d|_
nX|j|j�}|jr�|jjd�|jjd
�t|jd�t|j|j�n|jjd�|jjd�t|j|�t|j|j�|j
dk	r'|jj	�|_nt|j|j�|jdkr\t|jd�nt|j|jd�t|j|j�|jr�|jj|j�t|j|j�n|jjd�|j
dk	r�|jj	�|_nt|j|jd�t|jd�t|jd�dS(NR�R4iR�R�R�R�iR�iiR6R9R5R7iR8i(R�R�R�R�(R�R�R�R�R�(RCRjR�RARRfRPRMR�Rdt_form_length_postAttributeErrortIOErrorR<t_write_form_lengthR#R;R t_nframes_posR1ROR)Rlt_ssnd_length_pos(RTt
initlengtht
commlength((s/usr/lib64/python2.7/aifc.pyR�Ss^
	
	


		cCsw|jr<d	t|j�}|d@r3|d}nd}nd}d}t|jd||jd|d|�|S(
Niiiiiiiii(RCR%RlR#RAR�(RTt
datalengthR�t
verslength((s/usr/lib64/python2.7/aifc.pyR��s	

		"cCs6|jj�}|jd@rB|jd}|jjtd��n	|j}||jkr�|j|jkr�|jdkr�|jj	|d�dS|jj	|j
d�|j|�}|jj	|jd�t
|j|j�|jj	|jd�t
|j|d�|jj	|d�|j|_||_dS(Niii(RARdR�RR'R�RfR�R�RuR�R�R�R#R�(RTtcurposR�R((s/usr/lib64/python2.7/aifc.pyR��s&

	cCst|j�dkrdS|jjd�d}x[|jD]P}|\}}}|t|�dd}t|�d@dkr9|d}q9q9Wt|j|�|d|_t|jt|j��xP|jD]E}|\}}}t|j|�t|j|�t|j|�q�WdS(NiR:iiii(R%R?RARR#R�R R)(RTRRqRpRsR�((s/usr/lib64/python2.7/aifc.pyR��s"
N(&RRR<RAR_RYR�R�R�R�ReR�RhR�RiR�RgR�RkRmR�RnR�RrRoRdR�R�R^R�R�R�R�R�R�R�R�R�(((s/usr/lib64/python2.7/aifc.pyR� sF	
																		
												&	3	
	cCsi|dkr0t|d�r'|j}q0d}n|dkrFt|�S|dkr\t|�Std�dS(	NtmodeRZtrtwR�s$mode must be 'r', 'rb', 'w', or 'wb'(R�RZ(R�R�(R<R�R�R3R�R(RR�((s/usr/lib64/python2.7/aifc.pyR�s	

t__main__is/usr/demos/data/audio/bach.aiffR�tReadingsnchannels =snframes   =ssampwidth =sframerate =scomptype  =scompname  =itWritingR�isDone.(/t__doc__RR]t__all__t	ExceptionRR�RRR
RRRRR R!R"R#R)R1RUR2R3R�R<RRRtsystargvR�tfnRReRgRhRiRkRmtgntgR�RnRxRR�R^(((s/usr/lib64/python2.7/aifc.pyt<module>�sj					
							!�"��

	

#! /usr/bin/python2.7
# -*- coding: latin-1 -*-
"""Generate Python documentation in HTML or text for interactive use.

In the Python interpreter, do "from pydoc import help" to provide online
help.  Calling help(thing) on a Python object documents the object.

Or, at the shell command line outside of Python:

Run "pydoc <name>" to show documentation on something.  <name> may be
the name of a function, module, package, or a dotted reference to a
class or function within a module or module in a package.  If the
argument contains a path segment delimiter (e.g. slash on Unix,
backslash on Windows) it is treated as the path to a Python source file.

Run "pydoc -k <keyword>" to search for a keyword in the synopsis lines
of all available modules.

Run "pydoc -p <port>" to start an HTTP server on a given port on the
local machine to generate documentation web pages.  Port number 0 can be
used to get an arbitrary unused port.

Run "pydoc -w <name>" to write out the HTML documentation for a module
to a file named "<name>.html".

Module docs for core modules are assumed to be in

    https://docs.python.org/library/

This can be overridden by setting the PYTHONDOCS environment variable
to a different URL or to a local directory containing the Library
Reference Manual pages.
"""
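The command-line modes described above also have library-level equivalents. A minimal sketch using pydoc's public render_doc() and plain() helpers (the 'json' module is an arbitrary example target):

```python
import pydoc

# Render the same plain-text documentation that "pydoc json" prints,
# then strip the overstrike (backspace) bolding used for terminals.
text = pydoc.plain(pydoc.render_doc('json'))
title_line = text.splitlines()[0]   # e.g. "Python Library Documentation: module json"
```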

__author__ = "Ka-Ping Yee <ping@lfw.org>"
__date__ = "26 February 2001"

__version__ = "$Revision: 88564 $"
__credits__ = """Guido van Rossum, for an excellent programming language.
Tommy Burnette, the original creator of manpy.
Paul Prescod, for all his work on onlinehelp.
Richard Chamberlain, for the first implementation of textdoc.
"""

# Known bugs that can't be fixed here:
#   - imp.load_module() cannot be prevented from clobbering existing
#     loaded modules, so calling synopsis() on a binary module file
#     changes the contents of any existing module with the same name.
#   - If the __file__ attribute on a module is a relative path and
#     the current directory is changed with os.chdir(), an incorrect
#     path will be displayed.

import sys, imp, os, re, types, inspect, __builtin__, pkgutil, warnings
from repr import Repr
from string import expandtabs, find, join, lower, split, strip, rfind, rstrip
from traceback import extract_tb
try:
    from collections import deque
except ImportError:
    # Python 2.3 compatibility
    class deque(list):
        def popleft(self):
            return self.pop(0)

# --------------------------------------------------------- common routines

def pathdirs():
    """Convert sys.path into a list of absolute, existing, unique paths."""
    dirs = []
    normdirs = []
    for dir in sys.path:
        dir = os.path.abspath(dir or '.')
        normdir = os.path.normcase(dir)
        if normdir not in normdirs and os.path.isdir(dir):
            dirs.append(dir)
            normdirs.append(normdir)
    return dirs

def getdoc(object):
    """Get the doc string or comments for an object."""
    result = inspect.getdoc(object) or inspect.getcomments(object)
    result = _encode(result)
    return result and re.sub('^ *\n', '', rstrip(result)) or ''

def splitdoc(doc):
    """Split a doc string into a synopsis line (if any) and the rest."""
    lines = split(strip(doc), '\n')
    if len(lines) == 1:
        return lines[0], ''
    elif len(lines) >= 2 and not rstrip(lines[1]):
        return lines[0], join(lines[2:], '\n')
    return '', join(lines, '\n')

def classname(object, modname):
    """Get a class name and qualify it with a module name if necessary."""
    name = object.__name__
    if object.__module__ != modname:
        name = object.__module__ + '.' + name
    return name

def isdata(object):
    """Check if an object is of a type that probably means it's data."""
    return not (inspect.ismodule(object) or inspect.isclass(object) or
                inspect.isroutine(object) or inspect.isframe(object) or
                inspect.istraceback(object) or inspect.iscode(object))

def replace(text, *pairs):
    """Do a series of global replacements on a string."""
    while pairs:
        text = join(split(text, pairs[0]), pairs[1])
        pairs = pairs[2:]
    return text

def cram(text, maxlen):
    """Omit part of a string if needed to make it fit in a maximum length."""
    if len(text) > maxlen:
        pre = max(0, (maxlen-3)//2)
        post = max(0, maxlen-3-pre)
        return text[:pre] + '...' + text[len(text)-post:]
    return text

_re_stripid = re.compile(r' at 0x[0-9a-f]{6,16}(>+)$', re.IGNORECASE)
def stripid(text):
    """Remove the hexadecimal id from a Python object representation."""
    # The behaviour of %p is implementation-dependent in terms of case.
    return _re_stripid.sub(r'\1', text)

def _is_some_method(obj):
    return inspect.ismethod(obj) or inspect.ismethoddescriptor(obj)

def allmethods(cl):
    methods = {}
    for key, value in inspect.getmembers(cl, _is_some_method):
        methods[key] = 1
    for base in cl.__bases__:
        methods.update(allmethods(base)) # all your base are belong to us
    for key in methods.keys():
        methods[key] = getattr(cl, key)
    return methods

def _split_list(s, predicate):
    """Split sequence s via predicate, and return pair ([true], [false]).

    The return value is a 2-tuple of lists,
        ([x for x in s if predicate(x)],
         [x for x in s if not predicate(x)])
    """

    yes = []
    no = []
    for x in s:
        if predicate(x):
            yes.append(x)
        else:
            no.append(x)
    return yes, no

def visiblename(name, all=None, obj=None):
    """Decide whether to show documentation on a variable."""
    # Certain special names are redundant.
    _hidden_names = ('__builtins__', '__doc__', '__file__', '__path__',
                     '__module__', '__name__', '__slots__', '__package__')
    if name in _hidden_names: return 0
    # Private names are hidden, but special names are displayed.
    if name.startswith('__') and name.endswith('__'): return 1
    # Namedtuples have public fields and methods with a single leading underscore
    if name.startswith('_') and hasattr(obj, '_fields'):
        return 1
    if all is not None:
        # only document that which the programmer exported in __all__
        return name in all
    else:
        return not name.startswith('_')

def classify_class_attrs(object):
    """Wrap inspect.classify_class_attrs, with fixup for data descriptors."""
    def fixup(data):
        name, kind, cls, value = data
        if inspect.isdatadescriptor(value):
            kind = 'data descriptor'
        return name, kind, cls, value
    return map(fixup, inspect.classify_class_attrs(object))

# ----------------------------------------------------- Unicode support helpers

try:
    _unicode = unicode
except NameError:
    # If Python is built without Unicode support, the unicode type
    # will not exist. Fake one that nothing will match, and make
    # the _encode function a no-op.
    class _unicode(object):
        pass
    _encoding = 'ascii'
    def _encode(text, encoding='ascii'):
        return text
else:
    import locale
    _encoding = locale.getpreferredencoding()

    def _encode(text, encoding=None):
        if isinstance(text, unicode):
            return text.encode(encoding or _encoding, 'xmlcharrefreplace')
        else:
            return text

def _binstr(obj):
    # Ensure that we have an encoded (binary) string representation of obj,
    # even if it is a unicode string.
    if isinstance(obj, _unicode):
        return obj.encode(_encoding, 'xmlcharrefreplace')
    return str(obj)

# ----------------------------------------------------- module manipulation

def ispackage(path):
    """Guess whether a path refers to a package directory."""
    if os.path.isdir(path):
        for ext in ('.py', '.pyc', '.pyo'):
            if os.path.isfile(os.path.join(path, '__init__' + ext)):
                return True
    return False

def source_synopsis(file):
    line = file.readline()
    while line[:1] == '#' or not strip(line):
        line = file.readline()
        if not line: break
    line = strip(line)
    if line[:4] == 'r"""': line = line[1:]
    if line[:3] == '"""':
        line = line[3:]
        if line[-1:] == '\\': line = line[:-1]
        while not strip(line):
            line = file.readline()
            if not line: break
        result = strip(split(line, '"""')[0])
    else: result = None
    return result

def synopsis(filename, cache={}):
    """Get the one-line summary out of a module file."""
    mtime = os.stat(filename).st_mtime
    lastupdate, result = cache.get(filename, (None, None))
    if lastupdate is None or lastupdate < mtime:
        info = inspect.getmoduleinfo(filename)
        try:
            file = open(filename)
        except IOError:
            # module can't be opened, so skip it
            return None
        if info and 'b' in info[2]: # binary modules have to be imported
            try: module = imp.load_module('__temp__', file, filename, info[1:])
            except: return None
            result = module.__doc__.splitlines()[0] if module.__doc__ else None
            del sys.modules['__temp__']
        else: # text modules can be directly examined
            result = source_synopsis(file)
            file.close()
        cache[filename] = (mtime, result)
    return result

class ErrorDuringImport(Exception):
    """Errors that occurred while trying to import something to document it."""
    def __init__(self, filename, exc_info):
        exc, value, tb = exc_info
        self.filename = filename
        self.exc = exc
        self.value = value
        self.tb = tb

    def __str__(self):
        exc = self.exc
        if type(exc) is types.ClassType:
            exc = exc.__name__
        return 'problem in %s - %s: %s' % (self.filename, exc, self.value)

def importfile(path):
    """Import a Python source file or compiled file given its path."""
    magic = imp.get_magic()
    file = open(path, 'r')
    if file.read(len(magic)) == magic:
        kind = imp.PY_COMPILED
    else:
        kind = imp.PY_SOURCE
    file.close()
    filename = os.path.basename(path)
    name, ext = os.path.splitext(filename)
    file = open(path, 'r')
    try:
        module = imp.load_module(name, file, path, (ext, 'r', kind))
    except:
        raise ErrorDuringImport(path, sys.exc_info())
    file.close()
    return module

def safeimport(path, forceload=0, cache={}):
    """Import a module; handle errors; return None if the module isn't found.

    If the module *is* found but an exception occurs, it's wrapped in an
    ErrorDuringImport exception and reraised.  Unlike __import__, if a
    package path is specified, the module at the end of the path is returned,
    not the package at the beginning.  If the optional 'forceload' argument
    is 1, we reload the module from disk (unless it's a dynamic extension)."""
    try:
        # If forceload is 1 and the module has been previously loaded from
        # disk, we always have to reload the module.  Checking the file's
        # mtime isn't good enough (e.g. the module could contain a class
        # that inherits from another module that has changed).
        if forceload and path in sys.modules:
            if path not in sys.builtin_module_names:
                # Avoid simply calling reload() because it leaves names in
                # the currently loaded module lying around if they're not
                # defined in the new source file.  Instead, remove the
                # module from sys.modules and re-import.  Also remove any
                # submodules because they won't appear in the newly loaded
                # module's namespace if they're already in sys.modules.
                subs = [m for m in sys.modules if m.startswith(path + '.')]
                for key in [path] + subs:
                    # Prevent garbage collection.
                    cache[key] = sys.modules[key]
                    del sys.modules[key]
        module = __import__(path)
    except:
        # Did the error occur before or after the module was found?
        (exc, value, tb) = info = sys.exc_info()
        if path in sys.modules:
            # An error occurred while executing the imported module.
            raise ErrorDuringImport(sys.modules[path].__file__, info)
        elif exc is SyntaxError:
            # A SyntaxError occurred before we could execute the module.
            raise ErrorDuringImport(value.filename, info)
        elif exc is ImportError and extract_tb(tb)[-1][2]=='safeimport':
            # The import error occurred directly in this function,
            # which means there is no such module in the path.
            return None
        else:
            # Some other error occurred during the importing process.
            raise ErrorDuringImport(path, sys.exc_info())
    for part in split(path, '.')[1:]:
        try: module = getattr(module, part)
        except AttributeError: return None
    return module

# ---------------------------------------------------- formatter base class

class Doc:
    def document(self, object, name=None, *args):
        """Generate documentation for an object."""
        args = (object, name) + args
        # 'try' clause is to attempt to handle the possibility that inspect
        # identifies something in a way that pydoc itself has issues handling;
        # think 'super' and how it is a descriptor (which raises the exception
        # by lacking a __name__ attribute) and an instance.
        if inspect.isgetsetdescriptor(object): return self.docdata(*args)
        if inspect.ismemberdescriptor(object): return self.docdata(*args)
        try:
            if inspect.ismodule(object): return self.docmodule(*args)
            if inspect.isclass(object): return self.docclass(*args)
            if inspect.isroutine(object): return self.docroutine(*args)
        except AttributeError:
            pass
        if isinstance(object, property): return self.docproperty(*args)
        return self.docother(*args)

    def fail(self, object, name=None, *args):
        """Raise an exception for unimplemented types."""
        message = "don't know how to document object%s of type %s" % (
            name and ' ' + repr(name), type(object).__name__)
        raise TypeError, message

    docmodule = docclass = docroutine = docother = docproperty = docdata = fail

    def getdocloc(self, object,
                  basedir=os.path.join(sys.exec_prefix, "lib",
                                       "python"+sys.version[0:3])):
        """Return the location of module docs or None"""

        try:
            file = inspect.getabsfile(object)
        except TypeError:
            file = '(built-in)'

        docloc = os.environ.get("PYTHONDOCS",
                                "https://docs.python.org/library")
        basedir = os.path.normcase(basedir)
        if (isinstance(object, type(os)) and
            (object.__name__ in ('errno', 'exceptions', 'gc', 'imp',
                                 'marshal', 'posix', 'signal', 'sys',
                                 'thread', 'zipimport') or
             (file.startswith(basedir) and
              not file.startswith(os.path.join(basedir, 'site-packages')))) and
            object.__name__ not in ('xml.etree', 'test.pydoc_mod')):
            if docloc.startswith(("http://", "https://")):
                docloc = "%s/%s" % (docloc.rstrip("/"), object.__name__.lower())
            else:
                docloc = os.path.join(docloc, object.__name__.lower() + ".html")
        else:
            docloc = None
        return docloc

# -------------------------------------------- HTML documentation generator

class HTMLRepr(Repr):
    """Class for safely making an HTML representation of a Python object."""
    def __init__(self):
        Repr.__init__(self)
        self.maxlist = self.maxtuple = 20
        self.maxdict = 10
        self.maxstring = self.maxother = 100

    def escape(self, text):
        return replace(text, '&', '&amp;', '<', '&lt;', '>', '&gt;')

    def repr(self, object):
        return Repr.repr(self, object)

    def repr1(self, x, level):
        if hasattr(type(x), '__name__'):
            methodname = 'repr_' + join(split(type(x).__name__), '_')
            if hasattr(self, methodname):
                return getattr(self, methodname)(x, level)
        return self.escape(cram(stripid(repr(x)), self.maxother))

    def repr_string(self, x, level):
        test = cram(x, self.maxstring)
        testrepr = repr(test)
        if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
            # Backslashes are only literal in the string and are never
            # needed to make any special characters, so show a raw string.
            return 'r' + testrepr[0] + self.escape(test) + testrepr[0]
        return re.sub(r'((\\[\\abfnrtv\'"]|\\[0-9]..|\\x..|\\u....)+)',
                      r'<font color="#c040c0">\1</font>',
                      self.escape(testrepr))

    repr_str = repr_string

    def repr_instance(self, x, level):
        try:
            return self.escape(cram(stripid(repr(x)), self.maxstring))
        except:
            return self.escape('<%s instance>' % x.__class__.__name__)

    repr_unicode = repr_string

class HTMLDoc(Doc):
    """Formatter class for HTML documentation."""

    # ------------------------------------------- HTML formatting utilities

    _repr_instance = HTMLRepr()
    repr = _repr_instance.repr
    escape = _repr_instance.escape

    def page(self, title, contents):
        """Format an HTML page."""
        return _encode('''
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Python: %s</title>
<meta charset="utf-8">
</head><body bgcolor="#f0f0f8">
%s
</body></html>''' % (title, contents), 'ascii')

    def heading(self, title, fgcol, bgcol, extras=''):
        """Format a page heading."""
        return '''
<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="heading">
<tr bgcolor="%s">
<td valign=bottom>&nbsp;<br>
<font color="%s" face="helvetica, arial">&nbsp;<br>%s</font></td
><td align=right valign=bottom
><font color="%s" face="helvetica, arial">%s</font></td></tr></table>
    ''' % (bgcol, fgcol, title, fgcol, extras or '&nbsp;')

    def section(self, title, fgcol, bgcol, contents, width=6,
                prelude='', marginalia=None, gap='&nbsp;'):
        """Format a section with a heading."""
        if marginalia is None:
            marginalia = '<tt>' + '&nbsp;' * width + '</tt>'
        result = '''<p>
<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="section">
<tr bgcolor="%s">
<td colspan=3 valign=bottom>&nbsp;<br>
<font color="%s" face="helvetica, arial">%s</font></td></tr>
    ''' % (bgcol, fgcol, title)
        if prelude:
            result = result + '''
<tr bgcolor="%s"><td rowspan=2>%s</td>
<td colspan=2>%s</td></tr>
<tr><td>%s</td>''' % (bgcol, marginalia, prelude, gap)
        else:
            result = result + '''
<tr><td bgcolor="%s">%s</td><td>%s</td>''' % (bgcol, marginalia, gap)

        return result + '\n<td width="100%%">%s</td></tr></table>' % contents

    def bigsection(self, title, *args):
        """Format a section with a big heading."""
        title = '<big><strong>%s</strong></big>' % title
        return self.section(title, *args)

    def preformat(self, text):
        """Format literal preformatted text."""
        text = self.escape(expandtabs(text))
        return replace(text, '\n\n', '\n \n', '\n\n', '\n \n',
                             ' ', '&nbsp;', '\n', '<br>\n')

    def multicolumn(self, list, format, cols=4):
        """Format a list of items into a multi-column list."""
        result = ''
        rows = (len(list)+cols-1)//cols
        for col in range(cols):
            result = result + '<td width="%d%%" valign=top>' % (100//cols)
            for i in range(rows*col, rows*col+rows):
                if i < len(list):
                    result = result + format(list[i]) + '<br>\n'
            result = result + '</td>'
        return '<table width="100%%" summary="list"><tr>%s</tr></table>' % result
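`multicolumn` fills the table column-major: with `rows = ceil(len(list)/cols)`, column `col` takes items `rows*col` through `rows*col + rows - 1`, skipping indices past the end of the list. A standalone sketch of just that index arithmetic (the `column_layout` helper is hypothetical, not part of pydoc):

```python
# Hypothetical helper mirroring the column-major split in multicolumn():
# rows per column is ceil(n / cols); column `col` takes indices
# rows*col .. rows*col + rows - 1, dropping any past the end of the list.
def column_layout(items, cols=4):
    rows = (len(items) + cols - 1) // cols  # ceiling division
    return [[items[i]
             for i in range(rows * col, rows * col + rows)
             if i < len(items)]
            for col in range(cols)]
```

For five items in two columns this yields three rows, so the first column gets three items and the second gets the remaining two.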

    def grey(self, text): return '<font color="#909090">%s</font>' % text

    def namelink(self, name, *dicts):
        """Make a link for an identifier, given name-to-URL mappings."""
        for dict in dicts:
            if name in dict:
                return '<a href="%s">%s</a>' % (dict[name], name)
        return name

    def classlink(self, object, modname):
        """Make a link for a class."""
        name, module = object.__name__, sys.modules.get(object.__module__)
        if hasattr(module, name) and getattr(module, name) is object:
            return '<a href="%s.html#%s">%s</a>' % (
                module.__name__, name, classname(object, modname))
        return classname(object, modname)

    def modulelink(self, object):
        """Make a link for a module."""
        return '<a href="%s.html">%s</a>' % (object.__name__, object.__name__)

    def modpkglink(self, data):
        """Make a link for a module or package to display in an index."""
        name, path, ispackage, shadowed = data
        if shadowed:
            return self.grey(name)
        if path:
            url = '%s.%s.html' % (path, name)
        else:
            url = '%s.html' % name
        if ispackage:
            text = '<strong>%s</strong>&nbsp;(package)' % name
        else:
            text = name
        return '<a href="%s">%s</a>' % (url, text)

    def markup(self, text, escape=None, funcs={}, classes={}, methods={}):
        """Mark up some plain text, given a context of symbols to look for.
        Each context dictionary maps object names to anchor names."""
        escape = escape or self.escape
        results = []
        here = 0
        pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
                                r'RFC[- ]?(\d+)|'
                                r'PEP[- ]?(\d+)|'
                                r'(self\.)?(\w+))')
        while True:
            match = pattern.search(text, here)
            if not match: break
            start, end = match.span()
            results.append(escape(text[here:start]))

            all, scheme, rfc, pep, selfdot, name = match.groups()
            if scheme:
                url = escape(all).replace('"', '&quot;')
                results.append('<a href="%s">%s</a>' % (url, url))
            elif rfc:
                url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
                results.append('<a href="%s">%s</a>' % (url, escape(all)))
            elif pep:
                url = 'http://www.python.org/dev/peps/pep-%04d/' % int(pep)
                results.append('<a href="%s">%s</a>' % (url, escape(all)))
            elif selfdot:
                # Create a link for methods like 'self.method(...)'
                # and use <strong> for attributes like 'self.attr'
                if text[end:end+1] == '(':
                    results.append('self.' + self.namelink(name, methods))
                else:
                    results.append('self.<strong>%s</strong>' % name)
            elif text[end:end+1] == '(':
                results.append(self.namelink(name, methods, funcs, classes))
            else:
                results.append(self.namelink(name, classes))
            here = end
        results.append(escape(text[here:]))
        return join(results, '')
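`markup` walks the text with a single alternation pattern and emits a link for each URL, RFC reference, PEP reference, or known identifier it finds. A reduced, self-contained sketch of just the RFC/PEP branches (the `link_refs` helper is hypothetical; the real pattern above also matches URLs and `self.attr` names):

```python
import re

# Hypothetical reduced version of markup()'s RFC/PEP handling: the same
# URL templates as above, applied with re.sub instead of a manual scan.
_refs = re.compile(r'\b(?:RFC[- ]?(\d+)|PEP[- ]?(\d+))')

def link_refs(text):
    def repl(match):
        rfc, pep = match.groups()
        if rfc:
            url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
        else:
            url = 'http://www.python.org/dev/peps/pep-%04d/' % int(pep)
        return '<a href="%s">%s</a>' % (url, match.group(0))
    return _refs.sub(repl, text)
```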
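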

    # ---------------------------------------------- type-specific routines

    def formattree(self, tree, modname, parent=None):
        """Produce HTML for a class tree as given by inspect.getclasstree()."""
        result = ''
        for entry in tree:
            if type(entry) is type(()):
                c, bases = entry
                result = result + '<dt><font face="helvetica, arial">'
                result = result + self.classlink(c, modname)
                if bases and bases != (parent,):
                    parents = []
                    for base in bases:
                        parents.append(self.classlink(base, modname))
                    result = result + '(' + join(parents, ', ') + ')'
                result = result + '\n</font></dt>'
            elif type(entry) is type([]):
                result = result + '<dd>\n%s</dd>\n' % self.formattree(
                    entry, modname, c)
        return '<dl>\n%s</dl>\n' % result

    def docmodule(self, object, name=None, mod=None, *ignored):
        """Produce HTML documentation for a module object."""
        name = object.__name__ # ignore the passed-in name
        try:
            all = object.__all__
        except AttributeError:
            all = None
        parts = split(name, '.')
        links = []
        for i in range(len(parts)-1):
            links.append(
                '<a href="%s.html"><font color="#ffffff">%s</font></a>' %
                (join(parts[:i+1], '.'), parts[i]))
        linkedname = join(links + parts[-1:], '.')
        head = '<big><big><strong>%s</strong></big></big>' % linkedname
        try:
            path = inspect.getabsfile(object)
            url = path
            if sys.platform == 'win32':
                import nturl2path
                url = nturl2path.pathname2url(path)
            filelink = '<a href="file:%s">%s</a>' % (url, path)
        except TypeError:
            filelink = '(built-in)'
        info = []
        if hasattr(object, '__version__'):
            version = _binstr(object.__version__)
            # Reduce a version-control '$Revision: ... $' keyword to the bare
            # number ('$' is spelled in two pieces so the keyword is not
            # expanded in this file itself).
            if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
                version = strip(version[11:-1])
            info.append('version %s' % self.escape(version))
        if hasattr(object, '__date__'):
            info.append(self.escape(_binstr(object.__date__)))
        if info:
            head = head + ' (%s)' % join(info, ', ')
        docloc = self.getdocloc(object)
        if docloc is not None:
            docloc = '<br><a href="%(docloc)s">Module Docs</a>' % locals()
        else:
            docloc = ''
        result = self.heading(
            head, '#ffffff', '#7799ee',
            '<a href=".">index</a><br>' + filelink + docloc)

        modules = inspect.getmembers(object, inspect.ismodule)

        classes, cdict = [], {}
        for key, value in inspect.getmembers(object, inspect.isclass):
            # if __all__ exists, believe it.  Otherwise use old heuristic.
            if (all is not None or
                (inspect.getmodule(value) or object) is object):
                if visiblename(key, all, object):
                    classes.append((key, value))
                    cdict[key] = cdict[value] = '#' + key
        for key, value in classes:
            for base in value.__bases__:
                key, modname = base.__name__, base.__module__
                module = sys.modules.get(modname)
                if modname != name and module and hasattr(module, key):
                    if getattr(module, key) is base:
                        if key not in cdict:
                            cdict[key] = cdict[base] = modname + '.html#' + key
        funcs, fdict = [], {}
        for key, value in inspect.getmembers(object, inspect.isroutine):
            # if __all__ exists, believe it.  Otherwise use old heuristic.
            if (all is not None or
                inspect.isbuiltin(value) or inspect.getmodule(value) is object):
                if visiblename(key, all, object):
                    funcs.append((key, value))
                    fdict[key] = '#-' + key
                    if inspect.isfunction(value): fdict[value] = fdict[key]
        data = []
        for key, value in inspect.getmembers(object, isdata):
            if visiblename(key, all, object):
                data.append((key, value))

        doc = self.markup(getdoc(object), self.preformat, fdict, cdict)
        doc = doc and '<tt>%s</tt>' % doc
        result = result + '<p>%s</p>\n' % doc

        if hasattr(object, '__path__'):
            modpkgs = []
            for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
                modpkgs.append((modname, name, ispkg, 0))
            modpkgs.sort()
            contents = self.multicolumn(modpkgs, self.modpkglink)
            result = result + self.bigsection(
                'Package Contents', '#ffffff', '#aa55cc', contents)
        elif modules:
            contents = self.multicolumn(
                modules, lambda key_value, s=self: s.modulelink(key_value[1]))
            result = result + self.bigsection(
                'Modules', '#ffffff', '#aa55cc', contents)

        if classes:
            classlist = map(lambda key_value: key_value[1], classes)
            contents = [
                self.formattree(inspect.getclasstree(classlist, 1), name)]
            for key, value in classes:
                contents.append(self.document(value, key, name, fdict, cdict))
            result = result + self.bigsection(
                'Classes', '#ffffff', '#ee77aa', join(contents))
        if funcs:
            contents = []
            for key, value in funcs:
                contents.append(self.document(value, key, name, fdict, cdict))
            result = result + self.bigsection(
                'Functions', '#ffffff', '#eeaa77', join(contents))
        if data:
            contents = []
            for key, value in data:
                contents.append(self.document(value, key))
            result = result + self.bigsection(
                'Data', '#ffffff', '#55aa55', join(contents, '<br>\n'))
        if hasattr(object, '__author__'):
            contents = self.markup(_binstr(object.__author__), self.preformat)
            result = result + self.bigsection(
                'Author', '#ffffff', '#7799ee', contents)
        if hasattr(object, '__credits__'):
            contents = self.markup(_binstr(object.__credits__), self.preformat)
            result = result + self.bigsection(
                'Credits', '#ffffff', '#7799ee', contents)

        return result
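The `__version__` handling above recognizes an RCS-expanded `$Revision: ... $` string and strips the keyword wrapper, leaving only the revision number. A standalone sketch (the `clean_version` name is hypothetical):

```python
# Hypothetical standalone version of the __version__ cleanup in docmodule():
# an RCS-expanded '$Revision: 1.23 $' is reduced to just '1.23'; anything
# else is passed through unchanged.
def clean_version(version):
    if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
        version = version[11:-1].strip()
    return version
```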

    def docclass(self, object, name=None, mod=None, funcs={}, classes={},
                 *ignored):
        """Produce HTML documentation for a class object."""
        realname = object.__name__
        name = name or realname
        bases = object.__bases__

        contents = []
        push = contents.append

        # Cute little class to pump out a horizontal rule between sections.
        class HorizontalRule:
            def __init__(self):
                self.needone = 0
            def maybe(self):
                if self.needone:
                    push('<hr>\n')
                self.needone = 1
        hr = HorizontalRule()

        # List the mro, if non-trivial.
        mro = deque(inspect.getmro(object))
        if len(mro) > 2:
            hr.maybe()
            push('<dl><dt>Method resolution order:</dt>\n')
            for base in mro:
                push('<dd>%s</dd>\n' % self.classlink(base,
                                                      object.__module__))
            push('</dl>\n')

        def spill(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    try:
                        value = getattr(object, name)
                    except Exception:
                        # Some descriptors may meet a failure in their __get__.
                        # (bug #1785)
                        push(self._docdescriptor(name, value, mod))
                    else:
                        push(self.document(value, name, mod,
                                        funcs, classes, mdict, object))
                    push('\n')
            return attrs

        def spilldescriptors(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    push(self._docdescriptor(name, value, mod))
            return attrs

        def spilldata(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    base = self.docother(getattr(object, name), name, mod)
                    if (hasattr(value, '__call__') or
                            inspect.isdatadescriptor(value)):
                        doc = getattr(value, "__doc__", None)
                    else:
                        doc = None
                    if doc is None:
                        push('<dl><dt>%s</dl>\n' % base)
                    else:
                        doc = self.markup(getdoc(value), self.preformat,
                                          funcs, classes, mdict)
                        doc = '<dd><tt>%s</tt>' % doc
                        push('<dl><dt>%s%s</dl>\n' % (base, doc))
                    push('\n')
            return attrs

        attrs = filter(lambda data: visiblename(data[0], obj=object),
                       classify_class_attrs(object))
        mdict = {}
        for key, kind, homecls, value in attrs:
            mdict[key] = anchor = '#' + name + '-' + key
            try:
                value = getattr(object, key)
            except Exception:
                # Some descriptors may meet a failure in their __get__.
                # (bug #1785)
                pass
            try:
                # The value may not be hashable (e.g., a data attr with
                # a dict or list value).
                mdict[value] = anchor
            except TypeError:
                pass

        while attrs:
            if mro:
                thisclass = mro.popleft()
            else:
                thisclass = attrs[0][2]
            attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)

            if thisclass is __builtin__.object:
                attrs = inherited
                continue
            elif thisclass is object:
                tag = 'defined here'
            else:
                tag = 'inherited from %s' % self.classlink(thisclass,
                                                           object.__module__)
            tag += ':<br>\n'

            # Sort attrs by name.
            try:
                attrs.sort(key=lambda t: t[0])
            except TypeError:
                attrs.sort(lambda t1, t2: cmp(t1[0], t2[0]))    # 2.3 compat

            # Pump out the attrs, segregated by kind.
            attrs = spill('Methods %s' % tag, attrs,
                          lambda t: t[1] == 'method')
            attrs = spill('Class methods %s' % tag, attrs,
                          lambda t: t[1] == 'class method')
            attrs = spill('Static methods %s' % tag, attrs,
                          lambda t: t[1] == 'static method')
            attrs = spilldescriptors('Data descriptors %s' % tag, attrs,
                                     lambda t: t[1] == 'data descriptor')
            attrs = spilldata('Data and other attributes %s' % tag, attrs,
                              lambda t: t[1] == 'data')
            assert attrs == []
            attrs = inherited

        contents = ''.join(contents)

        if name == realname:
            title = '<a name="%s">class <strong>%s</strong></a>' % (
                name, realname)
        else:
            title = '<strong>%s</strong> = <a name="%s">class %s</a>' % (
                name, name, realname)
        if bases:
            parents = []
            for base in bases:
                parents.append(self.classlink(base, object.__module__))
            title = title + '(%s)' % join(parents, ', ')
        doc = self.markup(getdoc(object), self.preformat, funcs, classes, mdict)
        doc = doc and '<tt>%s<br>&nbsp;</tt>' % doc

        return self.section(title, '#000000', '#ffc8d8', contents, 3, doc)

    def formatvalue(self, object):
        """Format an argument default value as text."""
        return self.grey('=' + self.repr(object))

    def docroutine(self, object, name=None, mod=None,
                   funcs={}, classes={}, methods={}, cl=None):
        """Produce HTML documentation for a function or method object."""
        realname = object.__name__
        name = name or realname
        anchor = (cl and cl.__name__ or '') + '-' + name
        note = ''
        skipdocs = 0
        if inspect.ismethod(object):
            imclass = object.im_class
            if cl:
                if imclass is not cl:
                    note = ' from ' + self.classlink(imclass, mod)
            else:
                if object.im_self is not None:
                    note = ' method of %s instance' % self.classlink(
                        object.im_self.__class__, mod)
                else:
                    note = ' unbound %s method' % self.classlink(imclass, mod)
            object = object.im_func

        if name == realname:
            title = '<a name="%s"><strong>%s</strong></a>' % (anchor, realname)
        else:
            if (cl and realname in cl.__dict__ and
                cl.__dict__[realname] is object):
                reallink = '<a href="#%s">%s</a>' % (
                    cl.__name__ + '-' + realname, realname)
                skipdocs = 1
            else:
                reallink = realname
            title = '<a name="%s"><strong>%s</strong></a> = %s' % (
                anchor, name, reallink)
        if inspect.isfunction(object):
            args, varargs, varkw, defaults = inspect.getargspec(object)
            argspec = inspect.formatargspec(
                args, varargs, varkw, defaults, formatvalue=self.formatvalue)
            if realname == '<lambda>':
                title = '<strong>%s</strong> <em>lambda</em> ' % name
                argspec = argspec[1:-1] # remove parentheses
        else:
            argspec = '(...)'

        decl = title + argspec + (note and self.grey(
               '<font face="helvetica, arial">%s</font>' % note))

        if skipdocs:
            return '<dl><dt>%s</dt></dl>\n' % decl
        else:
            doc = self.markup(
                getdoc(object), self.preformat, funcs, classes, methods)
            doc = doc and '<dd><tt>%s</tt></dd>' % doc
            return '<dl><dt>%s</dt>%s</dl>\n' % (decl, doc)

    def _docdescriptor(self, name, value, mod):
        """Produce HTML documentation for a data descriptor or property."""
        results = []
        push = results.append

        if name:
            push('<dl><dt><strong>%s</strong></dt>\n' % name)
        if value.__doc__ is not None:
            doc = self.markup(getdoc(value), self.preformat)
            push('<dd><tt>%s</tt></dd>\n' % doc)
        push('</dl>\n')

        return ''.join(results)

    def docproperty(self, object, name=None, mod=None, cl=None):
        """Produce html documentation for a property."""
        return self._docdescriptor(name, object, mod)

    def docother(self, object, name=None, mod=None, *ignored):
        """Produce HTML documentation for a data object."""
        lhs = name and '<strong>%s</strong> = ' % name or ''
        return lhs + self.repr(object)

    def docdata(self, object, name=None, mod=None, cl=None):
        """Produce html documentation for a data descriptor."""
        return self._docdescriptor(name, object, mod)

    def index(self, dir, shadowed=None):
        """Generate an HTML index for a directory of modules."""
        modpkgs = []
        if shadowed is None: shadowed = {}
        for importer, name, ispkg in pkgutil.iter_modules([dir]):
            modpkgs.append((name, '', ispkg, name in shadowed))
            shadowed[name] = 1

        modpkgs.sort()
        contents = self.multicolumn(modpkgs, self.modpkglink)
        return self.bigsection(dir, '#ffffff', '#ee77aa', contents)

# -------------------------------------------- text documentation generator

class TextRepr(Repr):
    """Class for safely making a text representation of a Python object."""
    def __init__(self):
        Repr.__init__(self)
        self.maxlist = self.maxtuple = 20
        self.maxdict = 10
        self.maxstring = self.maxother = 100

    def repr1(self, x, level):
        if hasattr(type(x), '__name__'):
            methodname = 'repr_' + join(split(type(x).__name__), '_')
            if hasattr(self, methodname):
                return getattr(self, methodname)(x, level)
        return cram(stripid(repr(x)), self.maxother)

    def repr_string(self, x, level):
        test = cram(x, self.maxstring)
        testrepr = repr(test)
        if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
            # Backslashes are only literal in the string and are never
            # needed to make any special characters, so show a raw string.
            return 'r' + testrepr[0] + test + testrepr[0]
        return testrepr
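The raw-string heuristic above works on the repr: a literal backslash always appears in the repr as a doubled `\\`, so if deleting every doubled pair leaves no backslash behind, no escape sequences were needed and the string can be shown in raw form. A self-contained sketch (the `smart_repr` name is hypothetical, and truncation via `cram` is omitted):

```python
# Hypothetical sketch of TextRepr.repr_string's raw-string heuristic:
# if the string contains backslashes, and removing the doubled r'\\'
# pairs from its repr leaves no backslash behind, every backslash was
# literal, so the string is displayed as a raw string.
def smart_repr(s):
    r = repr(s)
    if '\\' in s and '\\' not in r.replace(r'\\', ''):
        return 'r' + r[0] + s + r[0]
    return r
```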

    repr_str = repr_string

    def repr_instance(self, x, level):
        try:
            return cram(stripid(repr(x)), self.maxstring)
        except:
            return '<%s instance>' % x.__class__.__name__

class TextDoc(Doc):
    """Formatter class for text documentation."""

    # ------------------------------------------- text formatting utilities

    _repr_instance = TextRepr()
    repr = _repr_instance.repr

    def bold(self, text):
        """Format a string in bold by overstriking."""
        return join(map(lambda ch: ch + '\b' + ch, text), '')
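Overstriking is the classic pager convention: each character is followed by a backspace and the character again, which `more`/`less` and line printers render as bold. A modern-Python equivalent (the `overstrike_bold` name is hypothetical):

```python
# Overstrike bold as done by TextDoc.bold(): char, backspace (\b), char.
def overstrike_bold(text):
    return ''.join(ch + '\b' + ch for ch in text)
```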

    def indent(self, text, prefix='    '):
        """Indent text by prepending a given prefix to each line."""
        if not text: return ''
        lines = split(text, '\n')
        lines = map(lambda line, prefix=prefix: prefix + line, lines)
        if lines: lines[-1] = rstrip(lines[-1])
        return join(lines, '\n')
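`indent` prefixes every line, then right-strips only the final line so a trailing blank line does not carry stray prefix whitespace. A modern-Python equivalent (the `indent_text` name is hypothetical):

```python
# Hypothetical modern rewrite of TextDoc.indent(): prefix each line,
# then right-strip only the last one.
def indent_text(text, prefix='    '):
    if not text:
        return ''
    lines = [prefix + line for line in text.split('\n')]
    lines[-1] = lines[-1].rstrip()
    return '\n'.join(lines)
```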

    def section(self, title, contents):
        """Format a section with a given heading."""
        return self.bold(title) + '\n' + rstrip(self.indent(contents)) + '\n\n'

    # ---------------------------------------------- type-specific routines

    def formattree(self, tree, modname, parent=None, prefix=''):
        """Render in text a class tree as returned by inspect.getclasstree()."""
        result = ''
        for entry in tree:
            if type(entry) is type(()):
                c, bases = entry
                result = result + prefix + classname(c, modname)
                if bases and bases != (parent,):
                    parents = map(lambda c, m=modname: classname(c, m), bases)
                    result = result + '(%s)' % join(parents, ', ')
                result = result + '\n'
            elif type(entry) is type([]):
                result = result + self.formattree(
                    entry, modname, c, prefix + '    ')
        return result

    def docmodule(self, object, name=None, mod=None):
        """Produce text documentation for a given module object."""
        name = object.__name__ # ignore the passed-in name
        synop, desc = splitdoc(getdoc(object))
        result = self.section('NAME', name + (synop and ' - ' + synop))

        try:
            all = object.__all__
        except AttributeError:
            all = None

        try:
            file = inspect.getabsfile(object)
        except TypeError:
            file = '(built-in)'
        result = result + self.section('FILE', file)

        docloc = self.getdocloc(object)
        if docloc is not None:
            result = result + self.section('MODULE DOCS', docloc)

        if desc:
            result = result + self.section('DESCRIPTION', desc)

        classes = []
        for key, value in inspect.getmembers(object, inspect.isclass):
            # if __all__ exists, believe it.  Otherwise use old heuristic.
            if (all is not None
                or (inspect.getmodule(value) or object) is object):
                if visiblename(key, all, object):
                    classes.append((key, value))
        funcs = []
        for key, value in inspect.getmembers(object, inspect.isroutine):
            # if __all__ exists, believe it.  Otherwise use old heuristic.
            if (all is not None or
                inspect.isbuiltin(value) or inspect.getmodule(value) is object):
                if visiblename(key, all, object):
                    funcs.append((key, value))
        data = []
        for key, value in inspect.getmembers(object, isdata):
            if visiblename(key, all, object):
                data.append((key, value))

        modpkgs = []
        modpkgs_names = set()
        if hasattr(object, '__path__'):
            for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
                modpkgs_names.add(modname)
                if ispkg:
                    modpkgs.append(modname + ' (package)')
                else:
                    modpkgs.append(modname)

            modpkgs.sort()
            result = result + self.section(
                'PACKAGE CONTENTS', join(modpkgs, '\n'))

        # Detect submodules as sometimes created by C extensions
        submodules = []
        for key, value in inspect.getmembers(object, inspect.ismodule):
            if value.__name__.startswith(name + '.') and key not in modpkgs_names:
                submodules.append(key)
        if submodules:
            submodules.sort()
            result = result + self.section(
                'SUBMODULES', join(submodules, '\n'))

        if classes:
            classlist = map(lambda key_value: key_value[1], classes)
            contents = [self.formattree(
                inspect.getclasstree(classlist, 1), name)]
            for key, value in classes:
                contents.append(self.document(value, key, name))
            result = result + self.section('CLASSES', join(contents, '\n'))

        if funcs:
            contents = []
            for key, value in funcs:
                contents.append(self.document(value, key, name))
            result = result + self.section('FUNCTIONS', join(contents, '\n'))

        if data:
            contents = []
            for key, value in data:
                contents.append(self.docother(value, key, name, maxlen=70))
            result = result + self.section('DATA', join(contents, '\n'))

        if hasattr(object, '__version__'):
            version = _binstr(object.__version__)
            if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
                version = strip(version[11:-1])
            result = result + self.section('VERSION', version)
        if hasattr(object, '__date__'):
            result = result + self.section('DATE', _binstr(object.__date__))
        if hasattr(object, '__author__'):
            result = result + self.section('AUTHOR', _binstr(object.__author__))
        if hasattr(object, '__credits__'):
            result = result + self.section('CREDITS', _binstr(object.__credits__))
        return result

    def docclass(self, object, name=None, mod=None, *ignored):
        """Produce text documentation for a given class object."""
        realname = object.__name__
        name = name or realname
        bases = object.__bases__

        def makename(c, m=object.__module__):
            return classname(c, m)

        if name == realname:
            title = 'class ' + self.bold(realname)
        else:
            title = self.bold(name) + ' = class ' + realname
        if bases:
            parents = map(makename, bases)
            title = title + '(%s)' % join(parents, ', ')

        doc = getdoc(object)
        contents = doc and [doc + '\n'] or []
        push = contents.append

        # List the mro, if non-trivial.
        mro = deque(inspect.getmro(object))
        if len(mro) > 2:
            push("Method resolution order:")
            for base in mro:
                push('    ' + makename(base))
            push('')

        # Cute little class to pump out a horizontal rule between sections.
        class HorizontalRule:
            def __init__(self):
                self.needone = 0
            def maybe(self):
                if self.needone:
                    push('-' * 70)
                self.needone = 1
        hr = HorizontalRule()

        def spill(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    try:
                        value = getattr(object, name)
                    except Exception:
                        # Some descriptors may meet a failure in their __get__.
                        # (bug #1785)
                        push(self._docdescriptor(name, value, mod))
                    else:
                        push(self.document(value,
                                        name, mod, object))
            return attrs

        def spilldescriptors(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    push(self._docdescriptor(name, value, mod))
            return attrs

        def spilldata(msg, attrs, predicate):
            ok, attrs = _split_list(attrs, predicate)
            if ok:
                hr.maybe()
                push(msg)
                for name, kind, homecls, value in ok:
                    if (hasattr(value, '__call__') or
                            inspect.isdatadescriptor(value)):
                        doc = getdoc(value)
                    else:
                        doc = None
                    push(self.docother(getattr(object, name),
                                       name, mod, maxlen=70, doc=doc) + '\n')
            return attrs

        attrs = filter(lambda data: visiblename(data[0], obj=object),
                       classify_class_attrs(object))
        while attrs:
            if mro:
                thisclass = mro.popleft()
            else:
                thisclass = attrs[0][2]
            attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)

            if thisclass is __builtin__.object:
                attrs = inherited
                continue
            elif thisclass is object:
                tag = "defined here"
            else:
                tag = "inherited from %s" % classname(thisclass,
                                                      object.__module__)

            # Sort attrs by name.
            attrs.sort()

            # Pump out the attrs, segregated by kind.
            attrs = spill("Methods %s:\n" % tag, attrs,
                          lambda t: t[1] == 'method')
            attrs = spill("Class methods %s:\n" % tag, attrs,
                          lambda t: t[1] == 'class method')
            attrs = spill("Static methods %s:\n" % tag, attrs,
                          lambda t: t[1] == 'static method')
            attrs = spilldescriptors("Data descriptors %s:\n" % tag, attrs,
                                     lambda t: t[1] == 'data descriptor')
            attrs = spilldata("Data and other attributes %s:\n" % tag, attrs,
                              lambda t: t[1] == 'data')
            assert attrs == []
            attrs = inherited

        contents = '\n'.join(contents)
        if not contents:
            return title + '\n'
        return title + '\n' + self.indent(rstrip(contents), ' |  ') + '\n'

    def formatvalue(self, object):
        """Format an argument default value as text."""
        return '=' + self.repr(object)

    def docroutine(self, object, name=None, mod=None, cl=None):
        """Produce text documentation for a function or method object."""
        realname = object.__name__
        name = name or realname
        note = ''
        skipdocs = 0
        if inspect.ismethod(object):
            imclass = object.im_class
            if cl:
                if imclass is not cl:
                    note = ' from ' + classname(imclass, mod)
            else:
                if object.im_self is not None:
                    note = ' method of %s instance' % classname(
                        object.im_self.__class__, mod)
                else:
                    note = ' unbound %s method' % classname(imclass,mod)
            object = object.im_func

        if name == realname:
            title = self.bold(realname)
        else:
            if (cl and realname in cl.__dict__ and
                cl.__dict__[realname] is object):
                skipdocs = 1
            title = self.bold(name) + ' = ' + realname
        if inspect.isfunction(object):
            args, varargs, varkw, defaults = inspect.getargspec(object)
            argspec = inspect.formatargspec(
                args, varargs, varkw, defaults, formatvalue=self.formatvalue)
            if realname == '<lambda>':
                title = self.bold(name) + ' lambda '
                argspec = argspec[1:-1] # remove parentheses
        else:
            argspec = '(...)'
        decl = title + argspec + note

        if skipdocs:
            return decl + '\n'
        else:
            doc = getdoc(object) or ''
            return decl + '\n' + (doc and rstrip(self.indent(doc)) + '\n')

    def _docdescriptor(self, name, value, mod):
        results = []
        push = results.append

        if name:
            push(self.bold(name))
            push('\n')
        doc = getdoc(value) or ''
        if doc:
            push(self.indent(doc))
            push('\n')
        return ''.join(results)

    def docproperty(self, object, name=None, mod=None, cl=None):
        """Produce text documentation for a property."""
        return self._docdescriptor(name, object, mod)

    def docdata(self, object, name=None, mod=None, cl=None):
        """Produce text documentation for a data descriptor."""
        return self._docdescriptor(name, object, mod)

    def docother(self, object, name=None, mod=None, parent=None, maxlen=None, doc=None):
        """Produce text documentation for a data object."""
        repr = self.repr(object)
        if maxlen:
            line = (name and name + ' = ' or '') + repr
            chop = maxlen - len(line)
            if chop < 0: repr = repr[:chop] + '...'
        line = (name and self.bold(name) + ' = ' or '') + repr
        if doc is not None:
            line += '\n' + self.indent(str(doc))
        return line
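
# A standalone sketch (hypothetical helper, not used by pydoc) of the
# truncation rule docother() applies above: the repr is chopped so the
# "name = repr" line approaches maxlen columns, with '...' marking the cut.
def _fit(name, rep, maxlen):
    line = (name and name + ' = ' or '') + rep
    chop = maxlen - len(line)
    if chop < 0:
        rep = rep[:chop] + '...'
    return (name and name + ' = ' or '') + rep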

# --------------------------------------------------------- user interfaces

def pager(text):
    """The first time this is called, determine what kind of pager to use."""
    global pager
    # Rebind the module-level name so the pager is chosen only once;
    # subsequent calls go straight to the selected implementation.
    pager = getpager()
    pager(text)
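
# A closure-based sketch (hypothetical, standalone) of the idea behind
# pager(): defer a potentially expensive choice until first use, then
# reuse it.  pydoc itself rebinds the module-global name instead.
def _make_lazy(choose):
    state = []
    def call(arg):
        if not state:
            state.append(choose())  # decide once, on first call
        return state[0](arg)
    return call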

def getpager():
    """Decide what method to use for paging through text."""
    if type(sys.stdout) is not types.FileType:
        return plainpager
    if not hasattr(sys.stdin, "isatty"):
        return plainpager
    if not sys.stdin.isatty() or not sys.stdout.isatty():
        return plainpager
    if 'PAGER' in os.environ:
        if sys.platform == 'win32': # pipes completely broken in Windows
            return lambda text: tempfilepager(plain(text), os.environ['PAGER'])
        elif os.environ.get('TERM') in ('dumb', 'emacs'):
            return lambda text: pipepager(plain(text), os.environ['PAGER'])
        else:
            return lambda text: pipepager(text, os.environ['PAGER'])
    if os.environ.get('TERM') in ('dumb', 'emacs'):
        return plainpager
    if sys.platform == 'win32' or sys.platform.startswith('os2'):
        return lambda text: tempfilepager(plain(text), 'more <')
    if hasattr(os, 'system') and os.system('(less) 2>/dev/null') == 0:
        return lambda text: pipepager(text, 'less')

    import tempfile
    (fd, filename) = tempfile.mkstemp()
    os.close(fd)
    try:
        if hasattr(os, 'system') and os.system('more "%s"' % filename) == 0:
            return lambda text: pipepager(text, 'more')
        else:
            return ttypager
    finally:
        os.unlink(filename)
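
# A data-driven sketch (hypothetical, simplified) of the priority order
# getpager() encodes above: non-tty output first, then $PAGER, then a
# dumb terminal, then platform defaults.
def _pager_kind(isatty, pager_env, term, platform):
    if not isatty:
        return 'plain'
    if pager_env:
        return 'pipe'
    if term in ('dumb', 'emacs'):
        return 'plain'
    if platform == 'win32':
        return 'tempfile'
    return 'tty'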

def plain(text):
    """Remove boldface formatting from text."""
    # Terminal bold is produced by overstriking ("X\bX"); drop each
    # character that is immediately followed by a backspace.
    return re.sub('.\b', '', text)
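
# Illustration (hypothetical helper, standalone): terminal bold is made
# by overstriking each character with itself via a backspace, which is
# exactly the character-plus-backspace pair that plain() deletes.
def _overstrike(s):
    return ''.join(c + '\b' + c for c in s)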

def pipepager(text, cmd):
    """Page through text by feeding it to another program."""
    pipe = os.popen(cmd, 'w')
    try:
        pipe.write(_encode(text))
        pipe.close()
    except IOError:
        pass # Ignore broken pipes caused by quitting the pager program.

def tempfilepager(text, cmd):
    """Page through text by invoking a program on a temporary file."""
    import tempfile
    # Use mkstemp rather than the race-prone, deprecated mktemp.
    (fd, filename) = tempfile.mkstemp()
    file = os.fdopen(fd, 'w')
    file.write(_encode(text))
    file.close()
    try:
        os.system(cmd + ' "' + filename + '"')
    finally:
        os.unlink(filename)

def ttypager(text):
    """Page through text on a text terminal."""
    lines = plain(_encode(plain(text), getattr(sys.stdout, 'encoding', _encoding))).split('\n')
    try:
        import tty
        fd = sys.stdin.fileno()
        old = tty.tcgetattr(fd)
        tty.setcbreak(fd)
        getchar = lambda: sys.stdin.read(1)
    except (ImportError, AttributeError):
        tty = None
        getchar = lambda: sys.stdin.readline()[:-1][:1]

    try:
        try:
            h = int(os.environ.get('LINES', 0))
        except ValueError:
            h = 0
        if h <= 1:
            h = 25
        r = inc = h - 1
        sys.stdout.write(join(lines[:inc], '\n') + '\n')
        while lines[r:]:
            sys.stdout.write('-- more --')
            sys.stdout.flush()
            c = getchar()

            if c in ('q', 'Q'):
                sys.stdout.write('\r          \r')
                break
            elif c in ('\r', '\n'):
                sys.stdout.write('\r          \r' + lines[r] + '\n')
                r = r + 1
                continue
            if c in ('b', 'B', '\x1b'):
                r = r - inc - inc
                if r < 0: r = 0
            sys.stdout.write('\n' + join(lines[r:r+inc], '\n') + '\n')
            r = r + inc

    finally:
        if tty:
            tty.tcsetattr(fd, tty.TCSAFLUSH, old)

def plainpager(text):
    """Simply print unformatted text.  This is the ultimate fallback."""
    sys.stdout.write(_encode(plain(text), getattr(sys.stdout, 'encoding', _encoding)))

def describe(thing):
    """Produce a short description of the given thing."""
    if inspect.ismodule(thing):
        if thing.__name__ in sys.builtin_module_names:
            return 'built-in module ' + thing.__name__
        if hasattr(thing, '__path__'):
            return 'package ' + thing.__name__
        else:
            return 'module ' + thing.__name__
    if inspect.isbuiltin(thing):
        return 'built-in function ' + thing.__name__
    if inspect.isgetsetdescriptor(thing):
        return 'getset descriptor %s.%s.%s' % (
            thing.__objclass__.__module__, thing.__objclass__.__name__,
            thing.__name__)
    if inspect.ismemberdescriptor(thing):
        return 'member descriptor %s.%s.%s' % (
            thing.__objclass__.__module__, thing.__objclass__.__name__,
            thing.__name__)
    if inspect.isclass(thing):
        return 'class ' + thing.__name__
    if inspect.isfunction(thing):
        return 'function ' + thing.__name__
    if inspect.ismethod(thing):
        return 'method ' + thing.__name__
    if type(thing) is types.InstanceType:
        return 'instance of ' + thing.__class__.__name__
    return type(thing).__name__

def locate(path, forceload=0):
    """Locate an object by name or dotted path, importing as necessary."""
    parts = [part for part in split(path, '.') if part]
    module, n = None, 0
    while n < len(parts):
        nextmodule = safeimport(join(parts[:n+1], '.'), forceload)
        if nextmodule: module, n = nextmodule, n + 1
        else: break
    if module:
        object = module
    else:
        object = __builtin__
    for part in parts[n:]:
        try:
            object = getattr(object, part)
        except AttributeError:
            return None
    return object
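
# A standalone sketch (hypothetical helper) of the second phase of
# locate(): once the longest importable prefix has been imported, the
# remaining dotted parts are plain attribute lookups.
def _walk_attrs(obj, dotted):
    for part in dotted.split('.'):
        try:
            obj = getattr(obj, part)
        except AttributeError:
            return None
    return obj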

# --------------------------------------- interactive interpreter interface

text = TextDoc()
html = HTMLDoc()

# Capture the type of old-style class instances (equivalent to
# types.InstanceType); render_doc special-cases it below.
class _OldStyleClass: pass
_OLD_INSTANCE_TYPE = type(_OldStyleClass())

def resolve(thing, forceload=0):
    """Given an object or a path to an object, get the object and its name."""
    if isinstance(thing, str):
        object = locate(thing, forceload)
        if object is None:
            raise ImportError, 'no Python documentation found for %r' % thing
        return object, thing
    else:
        name = getattr(thing, '__name__', None)
        return thing, name if isinstance(name, str) else None

def render_doc(thing, title='Python Library Documentation: %s', forceload=0):
    """Render text documentation, given an object or a path to an object."""
    object, name = resolve(thing, forceload)
    desc = describe(object)
    module = inspect.getmodule(object)
    if name and '.' in name:
        desc += ' in ' + name[:name.rfind('.')]
    elif module and module is not object:
        desc += ' in module ' + module.__name__
    if type(object) is _OLD_INSTANCE_TYPE:
        # If the passed object is an instance of an old-style class,
        # document its available methods instead of its value.
        object = object.__class__
    elif not (inspect.ismodule(object) or
              inspect.isclass(object) or
              inspect.isroutine(object) or
              inspect.isgetsetdescriptor(object) or
              inspect.ismemberdescriptor(object) or
              isinstance(object, property)):
        # If the passed object is a piece of data or an instance,
        # document its available methods instead of its value.
        object = type(object)
        desc += ' object'
    return title % desc + '\n\n' + text.document(object, name)

def doc(thing, title='Python Library Documentation: %s', forceload=0):
    """Display text documentation, given an object or a path to an object."""
    try:
        pager(render_doc(thing, title, forceload))
    except (ImportError, ErrorDuringImport), value:
        print value

def writedoc(thing, forceload=0):
    """Write HTML documentation to a file in the current directory."""
    try:
        object, name = resolve(thing, forceload)
        page = html.page(describe(object), html.document(object, name))
        file = open(name + '.html', 'w')
        file.write(page)
        file.close()
        print 'wrote', name + '.html'
    except (ImportError, ErrorDuringImport), value:
        print value

def writedocs(dir, pkgpath='', done=None):
    """Write out HTML documentation for all modules in a directory tree."""
    if done is None: done = {}
    for importer, modname, ispkg in pkgutil.walk_packages([dir], pkgpath):
        writedoc(modname)
    return

class Helper:

    # These dictionaries map a topic name to either an alias, or a tuple
    # (label, seealso-items).  The "label" is the label of the corresponding
    # section in the .rst file under Doc/ and an index into the dictionary
    # in pydoc_data/topics.py.
    #
    # CAUTION: if you change one of these dictionaries, be sure to adapt the
    #          list of needed labels in Doc/tools/pyspecific.py and
    #          regenerate the pydoc_data/topics.py file by running
    #              make pydoc-topics
    #          in Doc/ and copying the output file into the Lib/ directory.

    keywords = {
        'and': 'BOOLEAN',
        'as': 'with',
        'assert': ('assert', ''),
        'break': ('break', 'while for'),
        'class': ('class', 'CLASSES SPECIALMETHODS'),
        'continue': ('continue', 'while for'),
        'def': ('function', ''),
        'del': ('del', 'BASICMETHODS'),
        'elif': 'if',
        'else': ('else', 'while for'),
        'except': 'try',
        'exec': ('exec', ''),
        'finally': 'try',
        'for': ('for', 'break continue while'),
        'from': 'import',
        'global': ('global', 'NAMESPACES'),
        'if': ('if', 'TRUTHVALUE'),
        'import': ('import', 'MODULES'),
        'in': ('in', 'SEQUENCEMETHODS2'),
        'is': 'COMPARISON',
        'lambda': ('lambda', 'FUNCTIONS'),
        'not': 'BOOLEAN',
        'or': 'BOOLEAN',
        'pass': ('pass', ''),
        'print': ('print', ''),
        'raise': ('raise', 'EXCEPTIONS'),
        'return': ('return', 'FUNCTIONS'),
        'try': ('try', 'EXCEPTIONS'),
        'while': ('while', 'break continue if TRUTHVALUE'),
        'with': ('with', 'CONTEXTMANAGERS EXCEPTIONS yield'),
        'yield': ('yield', ''),
    }
    # Either add symbols to this dictionary or to the symbols dictionary
    # directly: Whichever is easier. They are merged later.
    _strprefixes = tuple(p + q for p in ('b', 'r', 'u') for q in ("'", '"'))
    _symbols_inverse = {
        'STRINGS' : ("'", "'''", '"""', '"') + _strprefixes,
        'OPERATORS' : ('+', '-', '*', '**', '/', '//', '%', '<<', '>>', '&',
                       '|', '^', '~', '<', '>', '<=', '>=', '==', '!=', '<>'),
        'COMPARISON' : ('<', '>', '<=', '>=', '==', '!=', '<>'),
        'UNARY' : ('-', '~'),
        'AUGMENTEDASSIGNMENT' : ('+=', '-=', '*=', '/=', '%=', '&=', '|=',
                                '^=', '<<=', '>>=', '**=', '//='),
        'BITWISE' : ('<<', '>>', '&', '|', '^', '~'),
        'COMPLEX' : ('j', 'J')
    }
    symbols = {
        '%': 'OPERATORS FORMATTING',
        '**': 'POWER',
        ',': 'TUPLES LISTS FUNCTIONS',
        '.': 'ATTRIBUTES FLOAT MODULES OBJECTS',
        '...': 'ELLIPSIS',
        ':': 'SLICINGS DICTIONARYLITERALS',
        '@': 'def class',
        '\\': 'STRINGS',
        '_': 'PRIVATENAMES',
        '__': 'PRIVATENAMES SPECIALMETHODS',
        '`': 'BACKQUOTES',
        '(': 'TUPLES FUNCTIONS CALLS',
        ')': 'TUPLES FUNCTIONS CALLS',
        '[': 'LISTS SUBSCRIPTS SLICINGS',
        ']': 'LISTS SUBSCRIPTS SLICINGS'
    }
    for topic, symbols_ in _symbols_inverse.iteritems():
        for symbol in symbols_:
            topics = symbols.get(symbol, topic)
            if topic not in topics:
                topics = topics + ' ' + topic
            symbols[symbol] = topics

    topics = {
        'TYPES': ('types', 'STRINGS UNICODE NUMBERS SEQUENCES MAPPINGS '
                  'FUNCTIONS CLASSES MODULES FILES inspect'),
        'STRINGS': ('strings', 'str UNICODE SEQUENCES STRINGMETHODS FORMATTING '
                    'TYPES'),
        'STRINGMETHODS': ('string-methods', 'STRINGS FORMATTING'),
        'FORMATTING': ('formatstrings', 'OPERATORS'),
        'UNICODE': ('strings', 'encodings unicode SEQUENCES STRINGMETHODS '
                    'FORMATTING TYPES'),
        'NUMBERS': ('numbers', 'INTEGER FLOAT COMPLEX TYPES'),
        'INTEGER': ('integers', 'int range'),
        'FLOAT': ('floating', 'float math'),
        'COMPLEX': ('imaginary', 'complex cmath'),
        'SEQUENCES': ('typesseq', 'STRINGMETHODS FORMATTING xrange LISTS'),
        'MAPPINGS': 'DICTIONARIES',
        'FUNCTIONS': ('typesfunctions', 'def TYPES'),
        'METHODS': ('typesmethods', 'class def CLASSES TYPES'),
        'CODEOBJECTS': ('bltin-code-objects', 'compile FUNCTIONS TYPES'),
        'TYPEOBJECTS': ('bltin-type-objects', 'types TYPES'),
        'FRAMEOBJECTS': 'TYPES',
        'TRACEBACKS': 'TYPES',
        'NONE': ('bltin-null-object', ''),
        'ELLIPSIS': ('bltin-ellipsis-object', 'SLICINGS'),
        'FILES': ('bltin-file-objects', ''),
        'SPECIALATTRIBUTES': ('specialattrs', ''),
        'CLASSES': ('types', 'class SPECIALMETHODS PRIVATENAMES'),
        'MODULES': ('typesmodules', 'import'),
        'PACKAGES': 'import',
        'EXPRESSIONS': ('operator-summary', 'lambda or and not in is BOOLEAN '
                        'COMPARISON BITWISE SHIFTING BINARY FORMATTING POWER '
                        'UNARY ATTRIBUTES SUBSCRIPTS SLICINGS CALLS TUPLES '
                        'LISTS DICTIONARIES BACKQUOTES'),
        'OPERATORS': 'EXPRESSIONS',
        'PRECEDENCE': 'EXPRESSIONS',
        'OBJECTS': ('objects', 'TYPES'),
        'SPECIALMETHODS': ('specialnames', 'BASICMETHODS ATTRIBUTEMETHODS '
                           'CALLABLEMETHODS SEQUENCEMETHODS1 MAPPINGMETHODS '
                           'SEQUENCEMETHODS2 NUMBERMETHODS CLASSES'),
        'BASICMETHODS': ('customization', 'cmp hash repr str SPECIALMETHODS'),
        'ATTRIBUTEMETHODS': ('attribute-access', 'ATTRIBUTES SPECIALMETHODS'),
        'CALLABLEMETHODS': ('callable-types', 'CALLS SPECIALMETHODS'),
        'SEQUENCEMETHODS1': ('sequence-types', 'SEQUENCES SEQUENCEMETHODS2 '
                             'SPECIALMETHODS'),
        'SEQUENCEMETHODS2': ('sequence-methods', 'SEQUENCES SEQUENCEMETHODS1 '
                             'SPECIALMETHODS'),
        'MAPPINGMETHODS': ('sequence-types', 'MAPPINGS SPECIALMETHODS'),
        'NUMBERMETHODS': ('numeric-types', 'NUMBERS AUGMENTEDASSIGNMENT '
                          'SPECIALMETHODS'),
        'EXECUTION': ('execmodel', 'NAMESPACES DYNAMICFEATURES EXCEPTIONS'),
        'NAMESPACES': ('naming', 'global ASSIGNMENT DELETION DYNAMICFEATURES'),
        'DYNAMICFEATURES': ('dynamic-features', ''),
        'SCOPING': 'NAMESPACES',
        'FRAMES': 'NAMESPACES',
        'EXCEPTIONS': ('exceptions', 'try except finally raise'),
        'COERCIONS': ('coercion-rules','CONVERSIONS'),
        'CONVERSIONS': ('conversions', 'COERCIONS'),
        'IDENTIFIERS': ('identifiers', 'keywords SPECIALIDENTIFIERS'),
        'SPECIALIDENTIFIERS': ('id-classes', ''),
        'PRIVATENAMES': ('atom-identifiers', ''),
        'LITERALS': ('atom-literals', 'STRINGS BACKQUOTES NUMBERS '
                     'TUPLELITERALS LISTLITERALS DICTIONARYLITERALS'),
        'TUPLES': 'SEQUENCES',
        'TUPLELITERALS': ('exprlists', 'TUPLES LITERALS'),
        'LISTS': ('typesseq-mutable', 'LISTLITERALS'),
        'LISTLITERALS': ('lists', 'LISTS LITERALS'),
        'DICTIONARIES': ('typesmapping', 'DICTIONARYLITERALS'),
        'DICTIONARYLITERALS': ('dict', 'DICTIONARIES LITERALS'),
        'BACKQUOTES': ('string-conversions', 'repr str STRINGS LITERALS'),
        'ATTRIBUTES': ('attribute-references', 'getattr hasattr setattr '
                       'ATTRIBUTEMETHODS'),
        'SUBSCRIPTS': ('subscriptions', 'SEQUENCEMETHODS1'),
        'SLICINGS': ('slicings', 'SEQUENCEMETHODS2'),
        'CALLS': ('calls', 'EXPRESSIONS'),
        'POWER': ('power', 'EXPRESSIONS'),
        'UNARY': ('unary', 'EXPRESSIONS'),
        'BINARY': ('binary', 'EXPRESSIONS'),
        'SHIFTING': ('shifting', 'EXPRESSIONS'),
        'BITWISE': ('bitwise', 'EXPRESSIONS'),
        'COMPARISON': ('comparisons', 'EXPRESSIONS BASICMETHODS'),
        'BOOLEAN': ('booleans', 'EXPRESSIONS TRUTHVALUE'),
        'ASSERTION': 'assert',
        'ASSIGNMENT': ('assignment', 'AUGMENTEDASSIGNMENT'),
        'AUGMENTEDASSIGNMENT': ('augassign', 'NUMBERMETHODS'),
        'DELETION': 'del',
        'PRINTING': 'print',
        'RETURNING': 'return',
        'IMPORTING': 'import',
        'CONDITIONAL': 'if',
        'LOOPING': ('compound', 'for while break continue'),
        'TRUTHVALUE': ('truth', 'if while and or not BASICMETHODS'),
        'DEBUGGING': ('debugger', 'pdb'),
        'CONTEXTMANAGERS': ('context-managers', 'with'),
    }

    def __init__(self, input=None, output=None):
        self._input = input
        self._output = output

    input  = property(lambda self: self._input or sys.stdin)
    output = property(lambda self: self._output or sys.stdout)

    def __repr__(self):
        # At the interactive prompt the caller's frame is named '?', so
        # evaluating bare `help` there starts the interactive helper.
        if inspect.stack()[1][3] == '?':
            self()
            return ''
        return '<pydoc.Helper instance>'

    # Unique sentinel: distinguishes "no argument" from help(None).
    _GoInteractive = object()
    def __call__(self, request=_GoInteractive):
        if request is not self._GoInteractive:
            self.help(request)
        else:
            self.intro()
            self.interact()
            self.output.write('''
You are now leaving help and returning to the Python interpreter.
If you want to ask for help on a particular object directly from the
interpreter, you can type "help(object)".  Executing "help('string')"
has the same effect as typing a particular string at the help> prompt.
''')

    def interact(self):
        self.output.write('\n')
        while True:
            try:
                request = self.getline('help> ')
                if not request: break
            except (KeyboardInterrupt, EOFError):
                break
            request = strip(request)
            # Make sure significant trailing quotation marks of literals don't
            # get deleted while cleaning input
            if (len(request) > 2 and request[0] == request[-1] in ("'", '"')
                    and request[0] not in request[1:-1]):
                request = request[1:-1]
            if lower(request) in ('q', 'quit'): break
            self.help(request)

    def getline(self, prompt):
        """Read one line, using raw_input when available."""
        if self.input is sys.stdin:
            return raw_input(prompt)
        else:
            self.output.write(prompt)
            self.output.flush()
            return self.input.readline()

    def help(self, request):
        if type(request) is type(''):
            request = request.strip()
            if request == 'help': self.intro()
            elif request == 'keywords': self.listkeywords()
            elif request == 'symbols': self.listsymbols()
            elif request == 'topics': self.listtopics()
            elif request == 'modules': self.listmodules()
            elif request[:8] == 'modules ':
                self.listmodules(split(request)[1])
            elif request in self.symbols: self.showsymbol(request)
            elif request in self.keywords: self.showtopic(request)
            elif request in self.topics: self.showtopic(request)
            elif request: doc(request, 'Help on %s:')
        elif isinstance(request, Helper): self()
        else: doc(request, 'Help on %s:')
        self.output.write('\n')

    def intro(self):
        self.output.write('''
Welcome to Python %s!  This is the online help utility.

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://docs.python.org/%s/tutorial/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules.  To quit this help utility and
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".
''' % tuple([sys.version[:3]]*2))

    def list(self, items, columns=4, width=80):
        items = items[:]
        items.sort()
        colw = width // columns
        rows = (len(items) + columns - 1) // columns
        for row in range(rows):
            for col in range(columns):
                i = col * rows + row
                if i < len(items):
                    self.output.write(items[i])
                    if col < columns - 1:
                        self.output.write(' ' + ' ' * (colw-1 - len(items[i])))
            self.output.write('\n')

    def listkeywords(self):
        self.output.write('''
Here is a list of the Python keywords.  Enter any keyword to get more help.

''')
        self.list(self.keywords.keys())

    def listsymbols(self):
        self.output.write('''
Here is a list of the punctuation symbols which Python assigns special meaning
to. Enter any symbol to get more help.

''')
        self.list(self.symbols.keys())

    def listtopics(self):
        self.output.write('''
Here is a list of available topics.  Enter any topic name to get more help.

''')
        self.list(self.topics.keys())

    def showtopic(self, topic, more_xrefs=''):
        try:
            import pydoc_data.topics
        except ImportError:
            self.output.write('''
Sorry, topic and keyword documentation is not available because the
module "pydoc_data.topics" could not be found.
''')
            return
        target = self.topics.get(topic, self.keywords.get(topic))
        if not target:
            self.output.write('no documentation found for %s\n' % repr(topic))
            return
        if type(target) is type(''):
            return self.showtopic(target, more_xrefs)

        label, xrefs = target
        try:
            doc = pydoc_data.topics.topics[label]
        except KeyError:
            self.output.write('no documentation found for %s\n' % repr(topic))
            return
        pager(strip(doc) + '\n')
        if more_xrefs:
            xrefs = (xrefs or '') + ' ' + more_xrefs
        if xrefs:
            import StringIO, formatter
            buffer = StringIO.StringIO()
            formatter.DumbWriter(buffer).send_flowing_data(
                'Related help topics: ' + join(split(xrefs), ', ') + '\n')
            self.output.write('\n%s\n' % buffer.getvalue())

    def showsymbol(self, symbol):
        target = self.symbols[symbol]
        topic, _, xrefs = target.partition(' ')
        self.showtopic(topic, xrefs)

    def listmodules(self, key=''):
        if key:
            self.output.write('''
Here is a list of matching modules.  Enter any module name to get more help.

''')
            apropos(key)
        else:
            self.output.write('''
Please wait a moment while I gather a list of all available modules...

''')
            modules = {}
            def callback(path, modname, desc, modules=modules):
                if modname and modname[-9:] == '.__init__':
                    modname = modname[:-9] + ' (package)'
                if find(modname, '.') < 0:
                    modules[modname] = 1
            def onerror(modname):
                callback(None, modname, None)
            ModuleScanner().run(callback, onerror=onerror)
            self.list(modules.keys())
            self.output.write('''
Enter any module name to get more help.  Or, type "modules spam" to search
for modules whose descriptions contain the word "spam".
''')

help = Helper()

class Scanner:
    """A generic tree iterator."""
    def __init__(self, roots, children, descendp):
        self.roots = roots[:]
        self.state = []
        self.children = children
        self.descendp = descendp

    def next(self):
        if not self.state:
            if not self.roots:
                return None
            # Start a fresh traversal from the next root.
            root = self.roots.pop(0)
            self.state = [(root, self.children(root))]
        node, children = self.state[-1]
        if not children:
            # This node is exhausted; back up and continue with its parent.
            self.state.pop()
            return self.next()
        child = children.pop(0)
        if self.descendp(child):
            self.state.append((child, self.children(child)))
        return child
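
# A standalone sketch (hypothetical data) of the order Scanner produces:
# the children of each root, depth-first and pre-order, descending only
# where descendp() says to.
def _preorder(roots, children, descendp):
    out = []
    for root in roots:
        stack = children(root)
        while stack:
            node = stack.pop(0)
            out.append(node)
            if descendp(node):
                stack = children(node) + stack
    return out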


class ModuleScanner:
    """An interruptible scanner that searches module synopses."""

    def run(self, callback, key=None, completer=None, onerror=None):
        if key: key = lower(key)
        self.quit = False
        seen = {}

        for modname in sys.builtin_module_names:
            if modname != '__main__':
                seen[modname] = 1
                if key is None:
                    callback(None, modname, '')
                else:
                    desc = split(__import__(modname).__doc__ or '', '\n')[0]
                    if find(lower(modname + ' - ' + desc), key) >= 0:
                        callback(None, modname, desc)

        for importer, modname, ispkg in pkgutil.walk_packages(onerror=onerror):
            if self.quit:
                break
            if key is None:
                callback(None, modname, '')
            else:
                loader = importer.find_module(modname)
                if hasattr(loader,'get_source'):
                    import StringIO
                    desc = source_synopsis(
                        StringIO.StringIO(loader.get_source(modname))
                    ) or ''
                    if hasattr(loader,'get_filename'):
                        path = loader.get_filename(modname)
                    else:
                        path = None
                else:
                    module = loader.load_module(modname)
                    desc = module.__doc__.splitlines()[0] if module.__doc__ else ''
                    path = getattr(module,'__file__',None)
                if find(lower(modname + ' - ' + desc), key) >= 0:
                    callback(path, modname, desc)

        if completer:
            completer()

def apropos(key):
    """Print all the one-line module summaries that contain a substring."""
    def callback(path, modname, desc):
        if modname[-9:] == '.__init__':
            modname = modname[:-9] + ' (package)'
        print modname, desc and '- ' + desc
    def onerror(modname):
        pass
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore') # ignore problems during import
        ModuleScanner().run(callback, key, onerror=onerror)

# --------------------------------------------------- web browser interface

def serve(port, callback=None, completer=None):
    import BaseHTTPServer, mimetools, select

    # Patch up mimetools.Message so it doesn't break if rfc822 is reloaded.
    class Message(mimetools.Message):
        def __init__(self, fp, seekable=1):
            # Resolve the grandparent class through __bases__ at call time,
            # so a reload of rfc822 doesn't leave us bound to a stale class.
            Message = self.__class__
            Message.__bases__[0].__bases__[0].__init__(self, fp, seekable)
            self.encodingheader = self.getheader('content-transfer-encoding')
            self.typeheader = self.getheader('content-type')
            self.parsetype()
            self.parseplist()

    class DocHandler(BaseHTTPServer.BaseHTTPRequestHandler):
        def send_document(self, title, contents):
            try:
                self.send_response(200)
                self.send_header('Content-Type', 'text/html')
                self.end_headers()
                self.wfile.write(html.page(title, contents))
            except IOError: pass

        def do_GET(self):
            path = self.path
            if path[-5:] == '.html': path = path[:-5]
            if path[:1] == '/': path = path[1:]
            if path and path != '.':
                try:
                    obj = locate(path, forceload=1)
                except ErrorDuringImport, value:
                    self.send_document(path, html.escape(str(value)))
                    return
                if obj:
                    self.send_document(describe(obj), html.document(obj, path))
                else:
                    self.send_document(path,
'no Python documentation found for %s' % repr(path))
            else:
                heading = html.heading(
'<big><big><strong>Python: Index of Modules</strong></big></big>',
'#ffffff', '#7799ee')
                def bltinlink(name):
                    return '<a href="%s.html">%s</a>' % (name, name)
                names = filter(lambda x: x != '__main__',
                               sys.builtin_module_names)
                contents = html.multicolumn(names, bltinlink)
                indices = ['<p>' + html.bigsection(
                    'Built-in Modules', '#ffffff', '#ee77aa', contents)]

                seen = {}
                for dir in sys.path:
                    indices.append(html.index(dir, seen))
                contents = heading + join(indices) + '''<p align=right>
<font color="#909090" face="helvetica, arial"><strong>
pydoc</strong> by Ka-Ping Yee &lt;ping@lfw.org&gt;</font>'''
                self.send_document('Index of Modules', contents)

        def log_message(self, *args): pass

    class DocServer(BaseHTTPServer.HTTPServer):
        def __init__(self, port, callback):
            host = 'localhost'
            self.address = (host, port)
            self.callback = callback
            self.base.__init__(self, self.address, self.handler)

        def serve_until_quit(self):
            import select
            self.quit = False
            while not self.quit:
                rd, wr, ex = select.select([self.socket.fileno()], [], [], 1)
                if rd: self.handle_request()

        def server_activate(self):
            self.base.server_activate(self)
            self.url = 'http://%s:%d/' % (self.address[0], self.server_port)
            if self.callback: self.callback(self)

    DocServer.base = BaseHTTPServer.HTTPServer
    DocServer.handler = DocHandler
    DocHandler.MessageClass = Message
    try:
        try:
            DocServer(port, callback).serve_until_quit()
        except (KeyboardInterrupt, select.error):
            pass
    finally:
        if completer: completer()

# ----------------------------------------------------- graphical interface

def gui():
    """Graphical interface (starts web server and pops up a control window)."""
    class GUI:
        def __init__(self, window, port=7464):
            self.window = window
            self.server = None
            self.scanner = None

            import Tkinter
            self.server_frm = Tkinter.Frame(window)
            self.title_lbl = Tkinter.Label(self.server_frm,
                text='Starting server...\n ')
            self.open_btn = Tkinter.Button(self.server_frm,
                text='open browser', command=self.open, state='disabled')
            self.quit_btn = Tkinter.Button(self.server_frm,
                text='quit serving', command=self.quit, state='disabled')

            self.search_frm = Tkinter.Frame(window)
            self.search_lbl = Tkinter.Label(self.search_frm, text='Search for')
            self.search_ent = Tkinter.Entry(self.search_frm)
            self.search_ent.bind('<Return>', self.search)
            self.stop_btn = Tkinter.Button(self.search_frm,
                text='stop', pady=0, command=self.stop, state='disabled')
            if sys.platform == 'win32':
                # Trying to hide and show this button crashes under Windows.
                self.stop_btn.pack(side='right')

            self.window.title('pydoc')
            self.window.protocol('WM_DELETE_WINDOW', self.quit)
            self.title_lbl.pack(side='top', fill='x')
            self.open_btn.pack(side='left', fill='x', expand=1)
            self.quit_btn.pack(side='right', fill='x', expand=1)
            self.server_frm.pack(side='top', fill='x')

            self.search_lbl.pack(side='left')
            self.search_ent.pack(side='right', fill='x', expand=1)
            self.search_frm.pack(side='top', fill='x')
            self.search_ent.focus_set()

            font = ('helvetica', sys.platform == 'win32' and 8 or 10)
            self.result_lst = Tkinter.Listbox(window, font=font, height=6)
            self.result_lst.bind('<Button-1>', self.select)
            self.result_lst.bind('<Double-Button-1>', self.goto)
            self.result_scr = Tkinter.Scrollbar(window,
                orient='vertical', command=self.result_lst.yview)
            self.result_lst.config(yscrollcommand=self.result_scr.set)

            self.result_frm = Tkinter.Frame(window)
            self.goto_btn = Tkinter.Button(self.result_frm,
                text='go to selected', command=self.goto)
            self.hide_btn = Tkinter.Button(self.result_frm,
                text='hide results', command=self.hide)
            self.goto_btn.pack(side='left', fill='x', expand=1)
            self.hide_btn.pack(side='right', fill='x', expand=1)

            self.window.update()
            self.minwidth = self.window.winfo_width()
            self.minheight = self.window.winfo_height()
            self.bigminheight = (self.server_frm.winfo_reqheight() +
                                 self.search_frm.winfo_reqheight() +
                                 self.result_lst.winfo_reqheight() +
                                 self.result_frm.winfo_reqheight())
            self.bigwidth, self.bigheight = self.minwidth, self.bigminheight
            self.expanded = 0
            self.window.wm_geometry('%dx%d' % (self.minwidth, self.minheight))
            self.window.wm_minsize(self.minwidth, self.minheight)
            self.window.tk.willdispatch()

            import threading
            threading.Thread(
                target=serve, args=(port, self.ready, self.quit)).start()

        def ready(self, server):
            self.server = server
            self.title_lbl.config(
                text='Python documentation server at\n' + server.url)
            self.open_btn.config(state='normal')
            self.quit_btn.config(state='normal')

        def open(self, event=None, url=None):
            url = url or self.server.url
            try:
                import webbrowser
                webbrowser.open(url)
            except ImportError: # pre-webbrowser.py compatibility
                if sys.platform == 'win32':
                    os.system('start "%s"' % url)
                else:
                    rc = os.system('netscape -remote "openURL(%s)" &' % url)
                    if rc: os.system('netscape "%s" &' % url)

        def quit(self, event=None):
            if self.server:
                self.server.quit = 1
            self.window.quit()

        def search(self, event=None):
            key = self.search_ent.get()
            self.stop_btn.pack(side='right')
            self.stop_btn.config(state='normal')
            self.search_lbl.config(text='Searching for "%s"...' % key)
            self.search_ent.forget()
            self.search_lbl.pack(side='left')
            self.result_lst.delete(0, 'end')
            self.goto_btn.config(state='disabled')
            self.expand()

            import threading
            if self.scanner:
                self.scanner.quit = 1
            self.scanner = ModuleScanner()
            def onerror(modname):
                pass
            threading.Thread(target=self.scanner.run,
                             args=(self.update, key, self.done),
                             kwargs=dict(onerror=onerror)).start()

        def update(self, path, modname, desc):
            if modname[-9:] == '.__init__':
                modname = modname[:-9] + ' (package)'
            self.result_lst.insert('end',
                modname + ' - ' + (desc or '(no description)'))

        def stop(self, event=None):
            if self.scanner:
                self.scanner.quit = 1
                self.scanner = None

        def done(self):
            self.scanner = None
            self.search_lbl.config(text='Search for')
            self.search_lbl.pack(side='left')
            self.search_ent.pack(side='right', fill='x', expand=1)
            if sys.platform != 'win32': self.stop_btn.forget()
            self.stop_btn.config(state='disabled')

        def select(self, event=None):
            self.goto_btn.config(state='normal')

        def goto(self, event=None):
            selection = self.result_lst.curselection()
            if selection:
                modname = split(self.result_lst.get(selection[0]))[0]
                self.open(url=self.server.url + modname + '.html')

        def collapse(self):
            if not self.expanded: return
            self.result_frm.forget()
            self.result_scr.forget()
            self.result_lst.forget()
            self.bigwidth = self.window.winfo_width()
            self.bigheight = self.window.winfo_height()
            self.window.wm_geometry('%dx%d' % (self.minwidth, self.minheight))
            self.window.wm_minsize(self.minwidth, self.minheight)
            self.expanded = 0

        def expand(self):
            if self.expanded: return
            self.result_frm.pack(side='bottom', fill='x')
            self.result_scr.pack(side='right', fill='y')
            self.result_lst.pack(side='top', fill='both', expand=1)
            self.window.wm_geometry('%dx%d' % (self.bigwidth, self.bigheight))
            self.window.wm_minsize(self.minwidth, self.bigminheight)
            self.expanded = 1

        def hide(self, event=None):
            self.stop()
            self.collapse()

    import Tkinter
    try:
        root = Tkinter.Tk()
        # Tk will crash if pythonw.exe has an XP .manifest
        # file and the root is not destroyed explicitly.
        # If the problem is ever fixed in Tk, the explicit
        # destroy can go.
        try:
            gui = GUI(root)
            root.mainloop()
        finally:
            root.destroy()
    except KeyboardInterrupt:
        pass

# -------------------------------------------------- command-line interface

def ispath(x):
    return isinstance(x, str) and find(x, os.sep) >= 0

def cli():
    """Command-line interface (looks at sys.argv to decide what to do)."""
    import getopt
    class BadUsage: pass

    # Scripts don't get the current directory in their path by default
    # unless they are run with the '-m' switch
    if '' not in sys.path:
        scriptdir = os.path.dirname(sys.argv[0])
        if scriptdir in sys.path:
            sys.path.remove(scriptdir)
        sys.path.insert(0, '.')

    try:
        opts, args = getopt.getopt(sys.argv[1:], 'gk:p:w')
        writing = 0

        for opt, val in opts:
            if opt == '-g':
                gui()
                return
            if opt == '-k':
                apropos(val)
                return
            if opt == '-p':
                try:
                    port = int(val)
                except ValueError:
                    raise BadUsage
                def ready(server):
                    print 'pydoc server ready at %s' % server.url
                def stopped():
                    print 'pydoc server stopped'
                serve(port, ready, stopped)
                return
            if opt == '-w':
                writing = 1

        if not args: raise BadUsage
        for arg in args:
            if ispath(arg) and not os.path.exists(arg):
                print 'file %r does not exist' % arg
                break
            try:
                if ispath(arg) and os.path.isfile(arg):
                    arg = importfile(arg)
                if writing:
                    if ispath(arg) and os.path.isdir(arg):
                        writedocs(arg)
                    else:
                        writedoc(arg)
                else:
                    help.help(arg)
            except ErrorDuringImport, value:
                print value

    except (getopt.error, BadUsage):
        cmd = os.path.basename(sys.argv[0])
        print """pydoc - the Python documentation tool

%s <name> ...
    Show text documentation on something.  <name> may be the name of a
    Python keyword, topic, function, module, or package, or a dotted
    reference to a class or function within a module or module in a
    package.  If <name> contains a '%s', it is used as the path to a
    Python source file to document. If name is 'keywords', 'topics',
    or 'modules', a listing of these things is displayed.

%s -k <keyword>
    Search for a keyword in the synopsis lines of all available modules.

%s -p <port>
    Start an HTTP server on the given port on the local machine.  Port
    number 0 can be used to get an arbitrary unused port.

%s -w <name> ...
    Write out the HTML documentation for a module to a file in the current
    directory.  If <name> contains a '%s', it is treated as a filename; if
    it names a directory, documentation is written for all the contents.
""" % (cmd, os.sep, cmd, cmd, cmd, os.sep)

if __name__ == '__main__': cli()
####
# Copyright 2000 by Timothy O'Malley <timo@alum.mit.edu>
#
#                All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software
# and its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Timothy O'Malley  not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# Timothy O'Malley DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
# SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS, IN NO EVENT SHALL Timothy O'Malley BE LIABLE FOR
# ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
#
####
#
# Id: Cookie.py,v 2.29 2000/08/23 05:28:49 timo Exp
#   by Timothy O'Malley <timo@alum.mit.edu>
#
#  Cookie.py is a Python module for the handling of HTTP
#  cookies as a Python dictionary.  See RFC 2109 for more
#  information on cookies.
#
#  The original idea to treat Cookies as a dictionary came from
#  Dave Mitchell (davem@magnet.com) in 1995, when he released the
#  first version of nscookie.py.
#
####

r"""
Here's a sample session to show how to use this module.
At the moment, this is the only documentation.

The Basics
----------

Importing is easy..

   >>> import Cookie

Most of the time you start by creating a cookie.  Cookies come in
three flavors, each with slightly different encoding semantics, but
more on that later.

   >>> C = Cookie.SimpleCookie()
   >>> C = Cookie.SerialCookie()
   >>> C = Cookie.SmartCookie()

[Note: Long-time users of Cookie.py will remember using
Cookie.Cookie() to create a Cookie object.  Although deprecated, it
is still supported by the code.  See the Backward Compatibility notes
for more information.]

Once you've created your Cookie, you can add values just as if it were
a dictionary.

   >>> C = Cookie.SmartCookie()
   >>> C["fig"] = "newton"
   >>> C["sugar"] = "wafer"
   >>> C.output()
   'Set-Cookie: fig=newton\r\nSet-Cookie: sugar=wafer'

Notice that the printable representation of a Cookie is the
appropriate format for a Set-Cookie: header.  This is the
default behavior.  You can change the header and printed
attributes by using the .output() function

   >>> C = Cookie.SmartCookie()
   >>> C["rocky"] = "road"
   >>> C["rocky"]["path"] = "/cookie"
   >>> print C.output(header="Cookie:")
   Cookie: rocky=road; Path=/cookie
   >>> print C.output(attrs=[], header="Cookie:")
   Cookie: rocky=road

The load() method of a Cookie extracts cookies from a string.  In a
CGI script, you would use this method to extract the cookies from the
HTTP_COOKIE environment variable.

   >>> C = Cookie.SmartCookie()
   >>> C.load("chips=ahoy; vienna=finger")
   >>> C.output()
   'Set-Cookie: chips=ahoy\r\nSet-Cookie: vienna=finger'

The load() method is darn-tootin smart about identifying cookies
within a string.  Escaped quotation marks, nested semicolons, and other
such trickeries do not confuse it.

   >>> C = Cookie.SmartCookie()
   >>> C.load('keebler="E=everybody; L=\\"Loves\\"; fudge=\\012;";')
   >>> print C
   Set-Cookie: keebler="E=everybody; L=\"Loves\"; fudge=\012;"

Each element of the Cookie also supports all of the RFC 2109
Cookie attributes.  Here's an example which sets the Path
attribute.

   >>> C = Cookie.SmartCookie()
   >>> C["oreo"] = "doublestuff"
   >>> C["oreo"]["path"] = "/"
   >>> print C
   Set-Cookie: oreo=doublestuff; Path=/

Each dictionary element has a 'value' attribute, which gives you
back the value associated with the key.

   >>> C = Cookie.SmartCookie()
   >>> C["twix"] = "none for you"
   >>> C["twix"].value
   'none for you'


A Bit More Advanced
-------------------

As mentioned before, there are three different flavors of Cookie
objects, each with different encoding/decoding semantics.  This
section briefly discusses the differences.

SimpleCookie

The SimpleCookie expects that all values should be standard strings.
Just to be sure, SimpleCookie invokes the str() builtin to convert
the value to a string, when the values are set dictionary-style.

   >>> C = Cookie.SimpleCookie()
   >>> C["number"] = 7
   >>> C["string"] = "seven"
   >>> C["number"].value
   '7'
   >>> C["string"].value
   'seven'
   >>> C.output()
   'Set-Cookie: number=7\r\nSet-Cookie: string=seven'


SerialCookie

The SerialCookie expects that all values should be serialized using
cPickle (or pickle, if cPickle isn't available).  As a result of
serializing, SerialCookie can save almost any Python object to a
value, and recover the exact same object when the cookie has been
returned.  (SerialCookie can yield some strange-looking cookie
values, however.)

   >>> C = Cookie.SerialCookie()
   >>> C["number"] = 7
   >>> C["string"] = "seven"
   >>> C["number"].value
   7
   >>> C["string"].value
   'seven'
   >>> C.output()
   'Set-Cookie: number="I7\\012."\r\nSet-Cookie: string="S\'seven\'\\012p1\\012."'

Be warned, however, if SerialCookie cannot de-serialize a value (because
it isn't a valid pickled object), IT WILL RAISE AN EXCEPTION.


SmartCookie

The SmartCookie combines aspects of each of the other two flavors.
When setting a value in a dictionary-fashion, the SmartCookie will
serialize (ala cPickle) the value *if and only if* it isn't a
Python string.  String objects are *not* serialized.  Similarly,
when the load() method parses out values, it attempts to de-serialize
the value.  If it fails, then it falls back to treating the value
as a string.

   >>> C = Cookie.SmartCookie()
   >>> C["number"] = 7
   >>> C["string"] = "seven"
   >>> C["number"].value
   7
   >>> C["string"].value
   'seven'
   >>> C.output()
   'Set-Cookie: number="I7\\012."\r\nSet-Cookie: string=seven'


Backwards Compatibility
-----------------------

In order to keep compatibility with earlier versions of Cookie.py,
it is still possible to use Cookie.Cookie() to create a Cookie.  In
fact, this simply returns a SmartCookie.

   >>> C = Cookie.Cookie()
   >>> print C.__class__.__name__
   SmartCookie


Finis.
"""  #"
#     ^
#     |----helps out font-lock

#
# Import our required modules
#
import string

try:
    from cPickle import dumps, loads
except ImportError:
    from pickle import dumps, loads

import re, warnings

__all__ = ["CookieError","BaseCookie","SimpleCookie","SerialCookie",
           "SmartCookie","Cookie"]

_nulljoin = ''.join
_semispacejoin = '; '.join
_spacejoin = ' '.join

#
# Define an exception visible to External modules
#
class CookieError(Exception):
    pass


# These quoting routines conform to the RFC2109 specification, which in
# turn references the character definitions from RFC2068.  They provide
# a two-way quoting algorithm.  Any non-text character is translated
# into a 4 character sequence: a backslash followed by the
# three-digit octal equivalent of the character.  Any '\' or '"' is
# quoted with a preceding backslash.
#
# These are taken from RFC2068 and RFC2109.
#       _LegalChars       is the list of chars which don't require "'s
#       _Translator       hash-table for fast quoting
#
_LegalChars       = string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~"
_Translator       = {
    '\000' : '\\000',  '\001' : '\\001',  '\002' : '\\002',
    '\003' : '\\003',  '\004' : '\\004',  '\005' : '\\005',
    '\006' : '\\006',  '\007' : '\\007',  '\010' : '\\010',
    '\011' : '\\011',  '\012' : '\\012',  '\013' : '\\013',
    '\014' : '\\014',  '\015' : '\\015',  '\016' : '\\016',
    '\017' : '\\017',  '\020' : '\\020',  '\021' : '\\021',
    '\022' : '\\022',  '\023' : '\\023',  '\024' : '\\024',
    '\025' : '\\025',  '\026' : '\\026',  '\027' : '\\027',
    '\030' : '\\030',  '\031' : '\\031',  '\032' : '\\032',
    '\033' : '\\033',  '\034' : '\\034',  '\035' : '\\035',
    '\036' : '\\036',  '\037' : '\\037',

    # Because of the way browsers really handle cookies (as opposed
    # to what the RFC says) we also encode , and ;

    ',' : '\\054', ';' : '\\073',

    '"' : '\\"',       '\\' : '\\\\',

    '\177' : '\\177',  '\200' : '\\200',  '\201' : '\\201',
    '\202' : '\\202',  '\203' : '\\203',  '\204' : '\\204',
    '\205' : '\\205',  '\206' : '\\206',  '\207' : '\\207',
    '\210' : '\\210',  '\211' : '\\211',  '\212' : '\\212',
    '\213' : '\\213',  '\214' : '\\214',  '\215' : '\\215',
    '\216' : '\\216',  '\217' : '\\217',  '\220' : '\\220',
    '\221' : '\\221',  '\222' : '\\222',  '\223' : '\\223',
    '\224' : '\\224',  '\225' : '\\225',  '\226' : '\\226',
    '\227' : '\\227',  '\230' : '\\230',  '\231' : '\\231',
    '\232' : '\\232',  '\233' : '\\233',  '\234' : '\\234',
    '\235' : '\\235',  '\236' : '\\236',  '\237' : '\\237',
    '\240' : '\\240',  '\241' : '\\241',  '\242' : '\\242',
    '\243' : '\\243',  '\244' : '\\244',  '\245' : '\\245',
    '\246' : '\\246',  '\247' : '\\247',  '\250' : '\\250',
    '\251' : '\\251',  '\252' : '\\252',  '\253' : '\\253',
    '\254' : '\\254',  '\255' : '\\255',  '\256' : '\\256',
    '\257' : '\\257',  '\260' : '\\260',  '\261' : '\\261',
    '\262' : '\\262',  '\263' : '\\263',  '\264' : '\\264',
    '\265' : '\\265',  '\266' : '\\266',  '\267' : '\\267',
    '\270' : '\\270',  '\271' : '\\271',  '\272' : '\\272',
    '\273' : '\\273',  '\274' : '\\274',  '\275' : '\\275',
    '\276' : '\\276',  '\277' : '\\277',  '\300' : '\\300',
    '\301' : '\\301',  '\302' : '\\302',  '\303' : '\\303',
    '\304' : '\\304',  '\305' : '\\305',  '\306' : '\\306',
    '\307' : '\\307',  '\310' : '\\310',  '\311' : '\\311',
    '\312' : '\\312',  '\313' : '\\313',  '\314' : '\\314',
    '\315' : '\\315',  '\316' : '\\316',  '\317' : '\\317',
    '\320' : '\\320',  '\321' : '\\321',  '\322' : '\\322',
    '\323' : '\\323',  '\324' : '\\324',  '\325' : '\\325',
    '\326' : '\\326',  '\327' : '\\327',  '\330' : '\\330',
    '\331' : '\\331',  '\332' : '\\332',  '\333' : '\\333',
    '\334' : '\\334',  '\335' : '\\335',  '\336' : '\\336',
    '\337' : '\\337',  '\340' : '\\340',  '\341' : '\\341',
    '\342' : '\\342',  '\343' : '\\343',  '\344' : '\\344',
    '\345' : '\\345',  '\346' : '\\346',  '\347' : '\\347',
    '\350' : '\\350',  '\351' : '\\351',  '\352' : '\\352',
    '\353' : '\\353',  '\354' : '\\354',  '\355' : '\\355',
    '\356' : '\\356',  '\357' : '\\357',  '\360' : '\\360',
    '\361' : '\\361',  '\362' : '\\362',  '\363' : '\\363',
    '\364' : '\\364',  '\365' : '\\365',  '\366' : '\\366',
    '\367' : '\\367',  '\370' : '\\370',  '\371' : '\\371',
    '\372' : '\\372',  '\373' : '\\373',  '\374' : '\\374',
    '\375' : '\\375',  '\376' : '\\376',  '\377' : '\\377'
    }

_idmap = ''.join(chr(x) for x in xrange(256))

def _quote(str, LegalChars=_LegalChars,
           idmap=_idmap, translate=string.translate):
    #
    # If the string does not need to be double-quoted,
    # then just return it.  Otherwise, surround the string
    # in double quotes and escape (with a \) any special
    # characters.
    #
    if "" == translate(str, idmap, LegalChars):
        return str
    else:
        return '"' + _nulljoin( map(_Translator.get, str, str) ) + '"'
# end _quote
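The quoting scheme above can be sketched in standalone form. The `quote` function below is an illustrative Python 3 re-implementation of `_quote` (it is not part of this module); it collapses the `_Translator` table into the rule the table encodes: control bytes, high bytes, `,` and `;` become three-digit octal escapes, while `"` and `\` get a backslash.

```python
# Illustrative re-implementation of the _quote algorithm above
# (Python 3 syntax; standalone, not part of this module).
import string

_LEGAL_CHARS = string.ascii_letters + string.digits + "!#$%&'*+-.^_`|~"

def quote(s):
    # A string of only legal characters needs no quoting at all.
    if all(c in _LEGAL_CHARS for c in s):
        return s
    out = []
    for c in s:
        if c in '"\\':
            out.append('\\' + c)              # backslash-escape " and \
        elif c in ',;' or ord(c) < 0x20 or ord(c) >= 0x7f:
            out.append('\\%03o' % ord(c))     # three-digit octal escape
        else:
            out.append(c)                     # e.g. spaces pass through
    return '"' + ''.join(out) + '"'

print(quote('sessionid'))  # -> sessionid
print(quote('a;b'))        # -> "a\073b"
```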


_OctalPatt = re.compile(r"\\[0-3][0-7][0-7]")
_QuotePatt = re.compile(r"[\\].")

def _unquote(str):
    # If there aren't any double quotes,
    # then there can't be any special characters.  See RFC 2109.
    if len(str) < 2:
        return str
    if str[0] != '"' or str[-1] != '"':
        return str

    # We have to assume that we must decode this string.
    # Down to work.

    # Remove the "s
    str = str[1:-1]

    # Check for special sequences.  Examples:
    #    \012 --> \n
    #    \"   --> "
    #
    i = 0
    n = len(str)
    res = []
    while 0 <= i < n:
        Omatch = _OctalPatt.search(str, i)
        Qmatch = _QuotePatt.search(str, i)
        if not Omatch and not Qmatch:              # Neither matched
            res.append(str[i:])
            break
        # else:
        j = k = -1
        if Omatch: j = Omatch.start(0)
        if Qmatch: k = Qmatch.start(0)
        if Qmatch and ( not Omatch or k < j ):     # QuotePatt matched
            res.append(str[i:k])
            res.append(str[k+1])
            i = k+2
        else:                                      # OctalPatt matched
            res.append(str[i:j])
            res.append( chr( int(str[j+1:j+4], 8) ) )
            i = j+4
    return _nulljoin(res)
# end _unquote
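The inverse direction can be sketched the same way. `unquote` below is an illustrative Python 3 re-implementation of `_unquote` (not part of this module) that decodes both escape forms — `\ooo` octal sequences and `\x` single-character escapes — in one left-to-right pass instead of the explicit scan loop above.

```python
import re

# Illustrative re-implementation of _unquote (Python 3 syntax).
_OCTAL = re.compile(r"\\[0-3][0-7][0-7]")

def unquote(s):
    if len(s) < 2 or s[0] != '"' or s[-1] != '"':
        return s                     # not quoted: nothing to decode
    s = s[1:-1]                      # strip the surrounding quotes
    def repl(m):
        esc = m.group(0)
        if _OCTAL.match(esc):
            return chr(int(esc[1:], 8))   # \073 -> ';'
        return esc[1]                     # \" -> ",  \\ -> \
    # Octal escapes are tried first, mirroring the loop's precedence.
    return re.sub(r'\\[0-3][0-7][0-7]|\\.', repl, s)

print(unquote(r'"a\073b"'))  # -> a;b
```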

# The _getdate() routine is used to set the expiration time in
# the cookie's HTTP header.  By default, _getdate() returns the
# current time in the appropriate "expires" format for a
# Set-Cookie header.  The one optional argument is an offset from
# now, in seconds.  For example, an offset of -3600 means "one hour ago".
# The offset may be a floating point number.
#

_weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']

_monthname = [None,
              'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
              'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def _getdate(future=0, weekdayname=_weekdayname, monthname=_monthname):
    from time import gmtime, time
    now = time()
    year, month, day, hh, mm, ss, wd, y, z = gmtime(now + future)
    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % \
           (weekdayname[wd], day, monthname[month], year, hh, mm, ss)
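To illustrate the format _getdate() produces, the same formatting logic can be anchored at a fixed timestamp so the output is deterministic. The `getdate_at` helper is hypothetical, introduced only for this demonstration.

```python
from time import gmtime

weekdayname = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
monthname = [None,
             'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
             'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']

def getdate_at(ts, future=0):
    # Same formatting as _getdate(), but anchored at an explicit
    # timestamp instead of time() so the result is reproducible.
    year, month, day, hh, mm, ss, wd, y, z = gmtime(ts + future)
    return "%s, %02d %3s %4d %02d:%02d:%02d GMT" % \
           (weekdayname[wd], day, monthname[month], year, hh, mm, ss)

print(getdate_at(0))        # the Unix epoch
print(getdate_at(0, 3600))  # one hour after the epoch
```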


#
# A class to hold ONE key,value pair.
# In a cookie, each such pair may have several attributes,
#       so this class is used to keep the attributes associated
#       with the appropriate key,value pair.
# This class also includes a coded_value attribute, which
#       is used to hold the network representation of the
#       value.  This is most useful when Python objects are
#       pickled for network transit.
#

class Morsel(dict):
    # RFC 2109 lists these attributes as reserved:
    #   path       comment         domain
    #   max-age    secure      version
    #
    # For historical reasons, these attributes are also reserved:
    #   expires
    #
    # This is an extension from Microsoft:
    #   httponly
    #
    # This dictionary provides a mapping from the lowercase
    # variant on the left to the appropriate traditional
    # formatting on the right.
    _reserved = { "expires"  : "expires",
                  "path"     : "Path",
                  "comment"  : "Comment",
                  "domain"   : "Domain",
                  "max-age"  : "Max-Age",
                  "secure"   : "secure",
                  "httponly" : "httponly",
                  "version"  : "Version",
                  }

    _flags = {'secure', 'httponly'}

    def __init__(self):
        # Set defaults
        self.key = self.value = self.coded_value = None

        # Set default attributes
        for K in self._reserved:
            dict.__setitem__(self, K, "")
    # end __init__

    def __setitem__(self, K, V):
        K = K.lower()
        if K not in self._reserved:
            raise CookieError("Invalid Attribute %s" % K)
        dict.__setitem__(self, K, V)
    # end __setitem__

    def isReservedKey(self, K):
        return K.lower() in self._reserved
    # end isReservedKey

    def set(self, key, val, coded_val,
            LegalChars=_LegalChars,
            idmap=_idmap, translate=string.translate):
        # First we verify that the key isn't a reserved word
        # Second we make sure it only contains legal characters
        if key.lower() in self._reserved:
            raise CookieError("Attempt to set a reserved key: %s" % key)
        if "" != translate(key, idmap, LegalChars):
            raise CookieError("Illegal key value: %s" % key)

        # It's a good key, so save it.
        self.key                 = key
        self.value               = val
        self.coded_value         = coded_val
    # end set

    def output(self, attrs=None, header = "Set-Cookie:"):
        return "%s %s" % ( header, self.OutputString(attrs) )

    __str__ = output

    def __repr__(self):
        return '<%s: %s=%s>' % (self.__class__.__name__,
                                self.key, repr(self.value) )

    def js_output(self, attrs=None):
        # Return a <script> fragment that sets this cookie via JavaScript
        return """
        <script type="text/javascript">
        <!-- begin hiding
        document.cookie = \"%s\";
        // end hiding -->
        </script>
        """ % ( self.OutputString(attrs).replace('"',r'\"'), )
    # end js_output()

    def OutputString(self, attrs=None):
        # Build up our result
        #
        result = []
        RA = result.append

        # First, the key=value pair
        RA("%s=%s" % (self.key, self.coded_value))

        # Now add any defined attributes
        if attrs is None:
            attrs = self._reserved
        items = self.items()
        items.sort()
        for K,V in items:
            if V == "": continue
            if K not in attrs: continue
            if K == "expires" and type(V) == type(1):
                RA("%s=%s" % (self._reserved[K], _getdate(V)))
            elif K == "max-age" and type(V) == type(1):
                RA("%s=%d" % (self._reserved[K], V))
            elif K in ("secure", "httponly"):
                RA(str(self._reserved[K]))
            else:
                RA("%s=%s" % (self._reserved[K], V))

        # Return the result
        return _semispacejoin(result)
    # end OutputString
# end Morsel class
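This module survives essentially unchanged as `http.cookies` in Python 3, so Morsel's behavior can be exercised there. A sketch, assuming that port (the exact ordering of attributes in the output string can differ between versions, so only membership is relied on):

```python
from http.cookies import SimpleCookie  # Python 3 home of this module

c = SimpleCookie()
c['sid'] = 'abc123'     # creates a Morsel holding key, value, coded_value
m = c['sid']
m['path'] = '/'         # reserved attribute; arbitrary keys raise CookieError
m['max-age'] = 3600     # an int here is rendered as Max-Age=3600

out = m.output()
print(out)  # e.g. Set-Cookie: sid=abc123; Max-Age=3600; Path=/
```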



#
# Pattern for finding cookies
#
# This used to be strict parsing based on the RFC2109 and RFC2068
# specifications.  I have since discovered that MSIE 3.0x doesn't
# follow the character rules outlined in those specs.  As a
# result, the parsing rules here are less strict.
#

_LegalKeyChars  = r"\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\="
_LegalValueChars = _LegalKeyChars + r"\[\]"
_CookiePattern = re.compile(
    r"(?x)"                       # This is a Verbose pattern
    r"\s*"                        # Optional whitespace at start of cookie
    r"(?P<key>"                   # Start of group 'key'
    "["+ _LegalKeyChars +"]+?"     # One or more legal key characters, nongreedy
    r")"                          # End of group 'key'
    r"("                          # Optional group: there may not be a value.
    r"\s*=\s*"                    # Equal Sign
    r"(?P<val>"                   # Start of group 'val'
    r'"(?:[^\\"]|\\.)*"'            # Any doublequoted string
    r"|"                            # or
    r"\w{3},\s[\s\w\d-]{9,11}\s[\d:]{8}\sGMT" # Special case for "expires" attr
    r"|"                            # or
    "["+ _LegalValueChars +"]*"        # Any word or empty string
    r")"                          # End of group 'val'
    r")?"                         # End of optional value group
    r"\s*"                        # Any number of spaces.
    r"(\s+|;|$)"                  # Ending either at space, semicolon, or EOS.
    )
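To see what one application of this pattern captures, the expression can be rebuilt standalone. This is an illustrative Python 3 re-creation of the pattern above, not an import from the module; each `match` consumes one key[=value] pair, and the end of one match is the starting point for the next.

```python
import re

legal_key_chars = r"\w\d!#%&'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\="
legal_value_chars = legal_key_chars + r"\[\]"

cookie_pattern = re.compile(
    r"\s*"                                        # optional leading whitespace
    r"(?P<key>[" + legal_key_chars + r"]+?)"      # nongreedy key
    r"(\s*=\s*"
    r"(?P<val>"
    r'"(?:[^\\"]|\\.)*"'                          # doublequoted string
    r"|\w{3},\s[\s\w\d-]{9,11}\s[\d:]{8}\sGMT"    # bare "expires" date
    r"|[" + legal_value_chars + r"]*"             # bare word (possibly empty)
    r"))?"
    r"\s*(\s+|;|$)")                              # terminator

m = cookie_pattern.match('chips=ahoy; vienna=finger')
print(m.group('key'), m.group('val'))   # -> chips ahoy
m2 = cookie_pattern.match('chips=ahoy; vienna=finger', m.end())
print(m2.group('key'), m2.group('val'))  # -> vienna finger
```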


# At long last, here is the cookie class.
#   Using this class is almost just like using a dictionary.
# See this module's docstring for example usage.
#
class BaseCookie(dict):
    # A container class for a set of Morsels
    #

    def value_decode(self, val):
        """real_value, coded_value = value_decode(STRING)
        Called prior to setting a cookie's value from the network
        representation.  The VALUE is the value read from the HTTP
        header.
        Override this function to modify the behavior of cookies.
        """
        return val, val
    # end value_decode

    def value_encode(self, val):
        """real_value, coded_value = value_encode(VALUE)
        Called prior to setting a cookie's value from the dictionary
        representation.  The VALUE is the value being assigned.
        Override this function to modify the behavior of cookies.
        """
        strval = str(val)
        return strval, strval
    # end value_encode

    def __init__(self, input=None):
        if input: self.load(input)
    # end __init__

    def __set(self, key, real_value, coded_value):
        """Private method for setting a cookie's value"""
        M = self.get(key, Morsel())
        M.set(key, real_value, coded_value)
        dict.__setitem__(self, key, M)
    # end __set

    def __setitem__(self, key, value):
        """Dictionary style assignment."""
        if isinstance(value, Morsel):
            # allow assignment of constructed Morsels (e.g. for pickling)
            dict.__setitem__(self, key, value)
        else:
            rval, cval = self.value_encode(value)
            self.__set(key, rval, cval)
    # end __setitem__

    def output(self, attrs=None, header="Set-Cookie:", sep="\015\012"):
        """Return a string suitable for HTTP."""
        result = []
        items = self.items()
        items.sort()
        for K,V in items:
            result.append( V.output(attrs, header) )
        return sep.join(result)
    # end output

    __str__ = output

    def __repr__(self):
        L = []
        items = self.items()
        items.sort()
        for K,V in items:
            L.append( '%s=%s' % (K,repr(V.value) ) )
        return '<%s: %s>' % (self.__class__.__name__, _spacejoin(L))

    def js_output(self, attrs=None):
        """Return a string suitable for JavaScript."""
        result = []
        items = self.items()
        items.sort()
        for K,V in items:
            result.append( V.js_output(attrs) )
        return _nulljoin(result)
    # end js_output

    def load(self, rawdata):
        """Load cookies from a string (presumably HTTP_COOKIE) or
        from a dictionary.  Loading cookies from a dictionary 'd'
        is equivalent to calling:
            map(Cookie.__setitem__, d.keys(), d.values())
        """
        if type(rawdata) == type(""):
            self.__ParseString(rawdata)
        else:
            # self.update() wouldn't call our custom __setitem__
            for k, v in rawdata.items():
                self[k] = v
        return
    # end load()

    def __ParseString(self, str, patt=_CookiePattern):
        i = 0            # Our starting point
        n = len(str)     # Length of string
        M = None         # current morsel

        while 0 <= i < n:
            # Start looking for a cookie
            match = patt.match(str, i)
            if not match: break          # No more cookies

            K,V = match.group("key"), match.group("val")
            i = match.end(0)

            # Parse the key, value in case it's metainfo
            if K[0] == "$":
                # We ignore attributes which pertain to the cookie
                # mechanism as a whole.  See RFC 2109.
                # (Does anyone care?)
                if M:
                    M[ K[1:] ] = V
            elif K.lower() in Morsel._reserved:
                if M:
                    if V is None:
                        if K.lower() in Morsel._flags:
                            M[K] = True
                    else:
                        M[K] = _unquote(V)
            elif V is not None:
                rval, cval = self.value_decode(V)
                self.__set(K, rval, cval)
                M = self[K]
    # end __ParseString
# end BaseCookie class

class SimpleCookie(BaseCookie):
    """SimpleCookie
    SimpleCookie supports strings as cookie values.  When setting
    the value using the dictionary assignment notation, SimpleCookie
    calls the builtin str() to convert the value to a string.  Values
    received from HTTP are kept as strings.
    """
    def value_decode(self, val):
        return _unquote( val ), val
    def value_encode(self, val):
        strval = str(val)
        return strval, _quote( strval )
# end SimpleCookie
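Assuming the Python 3 port of this module (`http.cookies`), the encode/decode pair can be exercised end to end. BaseCookie's `value_decode` is the identity, so it keeps the wire form verbatim, while SimpleCookie runs `_unquote` and recovers the original text:

```python
from http.cookies import BaseCookie, SimpleCookie  # Python 3 port

raw = 'total="15\\05400"; tag=py'

b = BaseCookie(raw)
print(b['total'].value)   # -> "15\05400"   (identity decode: still encoded)

s = SimpleCookie(raw)
print(s['total'].value)   # -> 15,00        (unquoted: \054 is ',')
print(s['tag'].value)     # -> py
```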

class SerialCookie(BaseCookie):
    """SerialCookie
    SerialCookie supports arbitrary objects as cookie values. All
    values are serialized (using cPickle) before being sent to the
    client.  All incoming values are assumed to be valid Pickle
    representations.  IF AN INCOMING VALUE IS NOT IN A VALID PICKLE
    FORMAT, THEN AN EXCEPTION WILL BE RAISED.

    Note: Large cookie values add overhead because they must be
    retransmitted on every HTTP transaction.

    Note: HTTP has a 2k limit on the size of a cookie.  This class
    does not check for this limit, so be careful!!!
    """
    def __init__(self, input=None):
        warnings.warn("SerialCookie class is insecure; do not use it",
                      DeprecationWarning)
        BaseCookie.__init__(self, input)
    # end __init__
    def value_decode(self, val):
        # This could raise an exception!
        return loads( _unquote(val) ), val
    def value_encode(self, val):
        return val, _quote( dumps(val) )
# end SerialCookie

class SmartCookie(BaseCookie):
    """SmartCookie
    SmartCookie supports arbitrary objects as cookie values.  If the
    object is a string, then it is quoted.  If the object is not a
    string, however, then SmartCookie will use cPickle to serialize
    the object into a string representation.

    Note: Large cookie values add overhead because they must be
    retransmitted on every HTTP transaction.

    Note: HTTP has a 2k limit on the size of a cookie.  This class
    does not check for this limit, so be careful!!!
    """
    def __init__(self, input=None):
        warnings.warn("Cookie/SmartCookie class is insecure; do not use it",
                      DeprecationWarning)
        BaseCookie.__init__(self, input)
    # end __init__
    def value_decode(self, val):
        strval = _unquote(val)
        try:
            return loads(strval), val
        except:
            return strval, val
    def value_encode(self, val):
        if type(val) == type(""):
            return val, _quote(val)
        else:
            return val, _quote( dumps(val) )
# end SmartCookie


###########################################################
# Backwards Compatibility:  Don't break any existing code!

# We provide Cookie() as an alias for SmartCookie()
Cookie = SmartCookie

#
###########################################################

def _test():
    import doctest, Cookie
    return doctest.testmod(Cookie)

if __name__ == "__main__":
    _test()


#Local Variables:
#tab-width: 4
#end:
�
zfc@sdZddddddddd	d
ddd
ddddddddddddgZdZddlZddlZy#ddlmZ	e	dd�Z
Wnek
r�d �Z
nXdZdZ
dZdZdZdZdZdZdefd!��YZdefd"��YZdefd#��YZd$efd%��YZd	eefd&��YZd'efd(��YZd)eefd*��YZd
efd+��YZd,efd-��YZdefd.��YZdefd/��YZ d
eefd0��YZ!deee fd1��YZ"eeee!ee"ee gZ#iee6ee6ee6ee6Z$yddl%Z%WnBek
r�ddl&Z&d2e'fd3��YZ(e(�Z%[&[(nXye%j)WnGe*k
r�e+e%j,�d4�r�e%j,�`-nd5�Z.d6�Z/nCXe%j)�Z)e+e)d4�r e)`-ne)d7�Z/e)d8�Z.[%[)e0d9�Z1de'fd:��YZ2e3d;�Z4ej5j6e2�d<e'fd=��YZ7de'fd>��YZ8d?e'fd@��YZ9dAdB�Z:idCdD6dEdF6dGdH6dGdI6dJdK6dJdL6dJdM6dJdN6dAdO6dAdP6dAdQ6dAdR6dAdS6dAdT6dAdU6dAdV6dW�Z;dX�Z<dY�Z=dZ�Z>d[�Z?d\d]�Z@d^�ZAd_�ZBd`e'fda��YZCeC�jDZEd\db�ZFdc�ZGdd�ZHi	dedF6dfdH6dgdI6dhdK6didL6djdM6dkdN6dldO6dmdP6dn�ZIe3e3do�ZJe8dpdqdredsee!egdtgdudvdwdxdydJ�ZKe8dpdzdre
dsee!eee"gdtg�ZLe8dpdzdredsgdtg�ZMddlNZNeNjOd{eNjPeNjQBeNjRB�jSZTeNjOd|�jSZUeNjOd}�jSZVeNjOd~eNjP�ZW[NyddlXZYWnek
r@nXe0d�ZZd��Z[d��Z\dJd��Z]d��Z^d��Z_e2d��Z`e2d��Zae2d��Zbe2dA�Zce2dJ�Zde2d�Zee`eafZfegd�krddlhZhddl&Z&ehjie&jjeg�ndS(�s�	
This is a Py2.3 implementation of decimal floating point arithmetic based on
the General Decimal Arithmetic Specification:

    http://speleotrove.com/decimal/decarith.html

and IEEE standard 854-1987:

    http://en.wikipedia.org/wiki/IEEE_854-1987

Decimal floating point has finite precision with arbitrarily large bounds.

The purpose of this module is to support arithmetic using familiar
"schoolhouse" rules and to avoid some of the tricky representation
issues associated with binary floating point.  The package is especially
useful for financial applications or for contexts where users have
expectations that are at odds with binary floating point (for instance,
in binary floating point, 1.00 % 0.1 gives 0.09999999999999995 instead
of the expected Decimal('0.00') returned by decimal floating point).

Here are some examples of using the decimal module:

>>> from decimal import *
>>> setcontext(ExtendedContext)
>>> Decimal(0)
Decimal('0')
>>> Decimal('1')
Decimal('1')
>>> Decimal('-.0123')
Decimal('-0.0123')
>>> Decimal(123456)
Decimal('123456')
>>> Decimal('123.45e12345678901234567890')
Decimal('1.2345E+12345678901234567892')
>>> Decimal('1.33') + Decimal('1.27')
Decimal('2.60')
>>> Decimal('12.34') + Decimal('3.87') - Decimal('18.41')
Decimal('-2.20')
>>> dig = Decimal(1)
>>> print dig / Decimal(3)
0.333333333
>>> getcontext().prec = 18
>>> print dig / Decimal(3)
0.333333333333333333
>>> print dig.sqrt()
1
>>> print Decimal(3).sqrt()
1.73205080756887729
>>> print Decimal(3) ** 123
4.85192780976896427E+58
>>> inf = Decimal(1) / Decimal(0)
>>> print inf
Infinity
>>> neginf = Decimal(-1) / Decimal(0)
>>> print neginf
-Infinity
>>> print neginf + inf
NaN
>>> print neginf * inf
-Infinity
>>> print dig / 0
Infinity
>>> getcontext().traps[DivisionByZero] = 1
>>> print dig / 0
Traceback (most recent call last):
  ...
  ...
  ...
DivisionByZero: x / 0
>>> c = Context()
>>> c.traps[InvalidOperation] = 0
>>> print c.flags[InvalidOperation]
0
>>> c.divide(Decimal(0), Decimal(0))
Decimal('NaN')
>>> c.traps[InvalidOperation] = 1
>>> print c.flags[InvalidOperation]
1
>>> c.flags[InvalidOperation] = 0
>>> print c.flags[InvalidOperation]
0
>>> print c.divide(Decimal(0), Decimal(0))
Traceback (most recent call last):
  ...
  ...
  ...
InvalidOperation: 0 / 0
>>> print c.flags[InvalidOperation]
1
>>> c.flags[InvalidOperation] = 0
>>> c.traps[InvalidOperation] = 0
>>> print c.divide(Decimal(0), Decimal(0))
NaN
>>> print c.flags[InvalidOperation]
1
>>>
tDecimaltContexttDefaultContexttBasicContexttExtendedContexttDecimalExceptiontClampedtInvalidOperationtDivisionByZerotInexacttRoundedt	SubnormaltOverflowt	Underflowt
ROUND_DOWNt
ROUND_HALF_UPtROUND_HALF_EVENt
ROUND_CEILINGtROUND_FLOORtROUND_UPtROUND_HALF_DOWNt
ROUND_05UPt
setcontextt
getcontexttlocalcontexts1.70i����N(t
namedtupletDecimalTuplessign digits exponentcGs|S(N((targs((s/usr/lib64/python2.7/decimal.pyt<lambda>�tcBseZdZd�ZRS(s1Base exception class.

    Used exceptions derive from this.
    If an exception derives from another exception besides this (such as
    Underflow (Inexact, Rounded, Subnormal) that indicates that it is only
    called if the others are present.  This isn't actually used for
    anything, though.

    handle  -- Called when context._raise_error is called and the
               trap_enabler is not set.  First argument is self, second is the
               context.  More arguments can be given, those being after
               the explanation in _raise_error (For example,
               context._raise_error(NewError, '(-x)!', self._sign) would
               call NewError().handle(context, self._sign).)

    To define a new exception, it should be sufficient to have it derive
    from DecimalException.
    cGsdS(N((tselftcontextR((s/usr/lib64/python2.7/decimal.pythandle�s(t__name__t
__module__t__doc__R (((s/usr/lib64/python2.7/decimal.pyR�scBseZdZRS(s)Exponent of a 0 changed to fit bounds.

    This occurs and signals clamped if the exponent of a result has been
    altered in order to fit the constraints of a specific concrete
    representation.  This may occur when the exponent of a zero result would
    be outside the bounds of a representation, or when a large normal
    number would have an encoded exponent that cannot be represented.  In
    this latter case, the exponent is reduced to fit and the corresponding
    number of zero digits are appended to the coefficient ("fold-down").
    (R!R"R#(((s/usr/lib64/python2.7/decimal.pyR�s
cBseZdZd�ZRS(s0An invalid operation was performed.

    Various bad things cause this:

    Something creates a signaling NaN
    -INF + INF
    0 * (+-)INF
    (+-)INF / (+-)INF
    x % 0
    (+-)INF % x
    x._rescale( non-integer )
    sqrt(-x) , x > 0
    0 ** 0
    x ** (non-integer)
    x ** (+-)INF
    An operand is invalid

    The result of the operation after these is a quiet positive NaN,
    except when the cause is a signaling NaN, in which case the result is
    also a quiet NaN, but with the original sign, and an optional
    diagnostic information.
    cGs:|r6t|dj|djdt�}|j|�StS(Nitn(t_dec_from_triplet_signt_inttTruet_fix_nant_NaN(RRRtans((s/usr/lib64/python2.7/decimal.pyR �s#
(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR�stConversionSyntaxcBseZdZd�ZRS(s�Trying to convert badly formed string.

    This occurs and signals invalid-operation if a string is being
    converted to a number and it does not conform to the numeric string
    syntax.  The result is [0,qNaN].
    cGstS(N(R*(RRR((s/usr/lib64/python2.7/decimal.pyR �s(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR,�scBseZdZd�ZRS(s�Division by 0.

    This occurs and signals division-by-zero if division of a finite number
    by zero was attempted (during a divide-integer or divide operation, or a
    power operation with negative right-hand operand), and the dividend was
    not zero.

    The result of the operation is [sign,inf], where sign is the exclusive
    or of the signs of the operands for divide, or is 1 for an odd power of
    -0, for power.
    cGst|S(N(t_SignedInfinity(RRtsignR((s/usr/lib64/python2.7/decimal.pyR �s(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR�stDivisionImpossiblecBseZdZd�ZRS(s�Cannot perform the division adequately.

    This occurs and signals invalid-operation if the integer result of a
    divide-integer or remainder operation had too many digits (would be
    longer than precision).  The result is [0,qNaN].
    cGstS(N(R*(RRR((s/usr/lib64/python2.7/decimal.pyR s(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR/�stDivisionUndefinedcBseZdZd�ZRS(s�Undefined result of division.

    This occurs and signals invalid-operation if division by zero was
    attempted (during a divide-integer, divide, or remainder operation), and
    the dividend is also zero.  The result is [0,qNaN].
    cGstS(N(R*(RRR((s/usr/lib64/python2.7/decimal.pyR 
s(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR0scBseZdZRS(s�Had to round, losing information.

    This occurs and signals inexact whenever the result of an operation is
    not exact (that is, it needed to be rounded and any discarded digits
    were non-zero), or if an overflow or underflow condition occurs.  The
    result in all cases is unchanged.

    The inexact signal may be tested (or trapped) to determine if a given
    operation (or sequence of operations) was inexact.
    (R!R"R#(((s/usr/lib64/python2.7/decimal.pyR	s
tInvalidContextcBseZdZd�ZRS(s�Invalid context.  Unknown rounding, for example.

    This occurs and signals invalid-operation if an invalid context was
    detected during an operation.  This can occur if contexts are not checked
    on creation and either the precision exceeds the capability of the
    underlying concrete representation or an unknown or unsupported rounding
    was specified.  These aspects of the context need only be checked when
    the values are required to be used.  The result is [0,qNaN].
    cGstS(N(R*(RRR((s/usr/lib64/python2.7/decimal.pyR 's(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyR1s	cBseZdZRS(s�Number got rounded (not  necessarily changed during rounding).

    This occurs and signals rounded whenever the result of an operation is
    rounded (that is, some zero or non-zero digits were discarded from the
    coefficient), or if an overflow or underflow condition occurs.  The
    result in all cases is unchanged.

    The rounded signal may be tested (or trapped) to determine if a given
    operation (or sequence of operations) caused a loss of precision.
    (R!R"R#(((s/usr/lib64/python2.7/decimal.pyR
*s
cBseZdZRS(s�Exponent < Emin before rounding.

    This occurs and signals subnormal whenever the result of a conversion or
    operation is subnormal (that is, its adjusted exponent is less than
    Emin, before any rounding).  The result in all cases is unchanged.

    The subnormal signal may be tested (or trapped) to determine if a given
    or operation (or sequence of operations) yielded a subnormal result.
    (R!R"R#(((s/usr/lib64/python2.7/decimal.pyR6s	cBseZdZd�ZRS(sNumerical overflow.

    This occurs and signals overflow if the adjusted exponent of a result
    (from a conversion or from an operation that is not an attempt to divide
    by zero), after rounding, would be greater than the largest value that
    can be handled by the implementation (the value Emax).

    The result depends on the rounding mode:

    For round-half-up and round-half-even (and for round-half-down and
    round-up, if implemented), the result of the operation is [sign,inf],
    where sign is the sign of the intermediate result.  For round-down, the
    result is the largest finite number that can be represented in the
    current precision, with the sign of the intermediate result.  For
    round-ceiling, the result is the same as for round-down if the sign of
    the intermediate result is 1, or is [0,inf] otherwise.  For round-floor,
    the result is the same as for round-down if the sign of the intermediate
    result is 0, or is [1,inf] otherwise.  In all cases, Inexact and Rounded
    will also be raised.
    cGs�|jttttfkr#t|S|dkrk|jtkrFt|St|d|j|j	|jd�S|dkr�|jt
kr�t|St|d|j|j	|jd�SdS(Nit9i(troundingRRRRR-RR%tprectEmaxR(RRR.R((s/usr/lib64/python2.7/decimal.pyR Ws(R!R"R#R (((s/usr/lib64/python2.7/decimal.pyRAscBseZdZRS(sxNumerical underflow with result rounded to 0.

    This occurs and signals underflow if a result is inexact and the
    adjusted exponent of the result would be smaller (more negative) than
    the smallest value that can be handled by the implementation (the value
    Emin).  That is, the result is both inexact and subnormal.

    The result after an underflow will be a subnormal number rounded, if
    necessary, so that its exponent is not less than Etiny.  This may result
    in 0 with the sign of the intermediate result and an exponent of Etiny.

    In all cases, Inexact, Rounded, and Subnormal will also be raised.
    (R!R"R#(((s/usr/lib64/python2.7/decimal.pyR
gs
t
MockThreadingcBseZed�ZRS(cCs|jtS(N(tmodulesR!(Rtsys((s/usr/lib64/python2.7/decimal.pytlocal�s(R!R"R8R9(((s/usr/lib64/python2.7/decimal.pyR6�st__decimal_context__cCsA|tttfkr.|j�}|j�n|tj�_dS(s%Set this thread's context to context.N(RRRtcopytclear_flagst	threadingt
currentThreadR:(R((s/usr/lib64/python2.7/decimal.pyR�s
cCsBytj�jSWn*tk
r=t�}|tj�_|SXdS(s�Returns this thread's context.

        If this thread does not yet have a context, returns
        a new context and sets this thread's context.
        New contexts are copies of DefaultContext.
        N(R=R>R:tAttributeErrorR(R((s/usr/lib64/python2.7/decimal.pyR�s
	cCs6y|jSWn$tk
r1t�}||_|SXdS(s�Returns this thread's context.

        If this thread does not yet have a context, returns
        a new context and sets this thread's context.
        New contexts are copies of DefaultContext.
        N(R:R?R(t_localR((s/usr/lib64/python2.7/decimal.pyR�s
		cCs;|tttfkr.|j�}|j�n||_dS(s%Set this thread's context to context.N(RRRR;R<R:(RR@((s/usr/lib64/python2.7/decimal.pyR�s
cCs"|dkrt�}nt|�S(s^Return a context manager for a copy of the supplied context

    Uses a copy of the current context if no context is specified
    The returned context manager creates a local decimal context
    in a with statement:
        def sin(x):
             with localcontext() as ctx:
                 ctx.prec += 2
                 # Rest of sin calculation algorithm
                 # uses a precision 2 greater than normal
             return +s  # Convert result to normal precision

         def sin(x):
             with localcontext(ExtendedContext):
                 # Rest of sin calculation algorithm
                 # uses the Extended Context from the
                 # General Decimal Arithmetic Specification
             return +s  # Convert result to normal context

    >>> setcontext(DefaultContext)
    >>> print getcontext().prec
    28
    >>> with localcontext():
    ...     ctx = getcontext()
    ...     ctx.prec += 2
    ...     print ctx.prec
    ...
    30
    >>> with localcontext(ExtendedContext):
    ...     print getcontext().prec
    ...
    9
    >>> print getcontext().prec
    28
    """
    if ctx is None:
        ctx = getcontext()
    return _ContextManager(ctx.copy())


class Decimal(object):
    """Floating point class for decimal arithmetic."""

    __slots__ = ('_exp', '_int', '_sign', '_is_special')

    def __new__(cls, value="0", context=None):
        """Create a decimal point instance.

        >>> Decimal('3.14')              # string input
        Decimal('3.14')
        >>> Decimal((0, (3, 1, 4), -2))  # tuple (sign, digit_tuple, exponent)
        Decimal('3.14')
        >>> Decimal(314)                 # int or long
        Decimal('314')
        >>> Decimal(Decimal(314))        # another decimal instance
        Decimal('314')
        >>> Decimal('  3.14  \n')        # leading and trailing whitespace okay
        Decimal('3.14')
        """
        # Invalid input raises ValueError ("Invalid literal for Decimal: %r",
        # "Invalid tuple size in creation of Decimal from list or tuple.  The
        # list or tuple should have exactly three elements.", "Invalid sign.
        # The first value in the tuple should be an integer; either 0 for a
        # positive number or 1 for a negative number.", "The second value in
        # the tuple must be composed of integers in the range 0 through 9.",
        # "The third value in the tuple must be an integer, or one of the
        # strings 'F', 'n', 'N'.") or TypeError ("Cannot convert %r to
        # Decimal").
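The accepted input forms shown in the doctest are interchangeable; a small sketch:

```python
from decimal import Decimal

# String, (sign, digit_tuple, exponent) tuple, and integer inputs that
# denote the same values: sign 0 means positive, digits (3, 1, 4) with
# exponent -2 give 3.14.
from_string = Decimal('3.14')
from_tuple = Decimal((0, (3, 1, 4), -2))
from_int = Decimal(314)
```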
    def from_float(cls, f):
        """Converts a float to a decimal number, exactly.

        Note that Decimal.from_float(0.1) is not the same as Decimal('0.1').
        Since 0.1 is not exactly representable in binary floating point, the
        value is stored as the nearest representable value which is
        0x1.999999999999ap-4.  The exact equivalent of the value in decimal
        is 0.1000000000000000055511151231257827021181583404541015625.

        >>> Decimal.from_float(0.1)
        Decimal('0.1000000000000000055511151231257827021181583404541015625')
        >>> Decimal.from_float(float('nan'))
        Decimal('NaN')
        >>> Decimal.from_float(float('inf'))
        Decimal('Infinity')
        >>> Decimal.from_float(-float('inf'))
        Decimal('-Infinity')
        >>> Decimal.from_float(-0.0)
        Decimal('-0')

        """
    from_float = classmethod(from_float)
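The exactness point in the docstring is easy to observe: Decimal('0.1') is one tenth, while from_float(0.1) captures the binary double nearest to it. A sketch:

```python
from decimal import Decimal

exact_tenth = Decimal('0.1')            # exactly 1/10
binary_tenth = Decimal.from_float(0.1)  # nearest binary double to 1/10
difference = binary_tenth - exact_tenth
```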
    def _isnan(self):
        """Returns whether the number is not actually one.

        0 if a number
        1 if NaN
        2 if sNaN
        """

    def _isinfinity(self):
        """Returns whether the number is infinite

        0 if finite or not a number
        1 if +INF
        -1 if -INF
        """

    def _check_nans(self, other=None, context=None):
        """Returns whether the number is not actually one.

        if self, other are sNaN, signal
        if self, other are NaN return nan
        return 0

        Done before operations.
        """

    def _compare_check_nans(self, other, context):
        """Version of _check_nans used for the signaling comparisons
        compare_signal, __le__, __lt__, __ge__, __gt__.

        Signal InvalidOperation if either self or other is a (quiet
        or signaling) NaN.  Signaling NaNs take precedence over quiet
        NaNs.

        Return 0 if neither operand is a NaN.
        """

    def __nonzero__(self):
        """Return True if self is nonzero; otherwise return False.

        NaNs and infinities are considered nonzero.
        """

    def _cmp(self, other):
        """Compare the two non-NaN decimal instances self and other.

        Returns -1 if self < other, 0 if self == other and 1
        if self > other.  This routine is for internal use only."""

    # __eq__, __ne__, __lt__, __le__, __gt__ and __ge__ convert the other
    # operand, check for NaNs and then delegate to _cmp.

    def compare(self, other, context=None):
        """Compares one to another.

        -1 => a < b
        0  => a = b
        1  => a > b
        NaN => one is NaN
        Like __cmp__, but returns Decimal instances.
        """
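compare reports its verdict as a Decimal instance rather than a plain int, and equality is numeric; a sketch:

```python
from decimal import Decimal

lt = Decimal('1').compare(Decimal('2'))    # less than
eq = Decimal('1').compare(Decimal('1.0'))  # numerically equal
gt = Decimal('2').compare(Decimal('1'))    # greater than
```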
    def __hash__(self):
        """x.__hash__() <==> hash(x)"""
        # Hashing a signaling NaN raises TypeError
        # ("Cannot hash a signaling NaN value.").

    def as_tuple(self):
        """Represents the number as a triple tuple.

        To show the internals exactly as they are.
        """

    def __repr__(self):
        """Represents the number as an instance of Decimal."""

    def __str__(self, eng=False, context=None):
        """Return string representation of the number in scientific notation.

        Captures all of the information in the underlying representation.
        """

    def to_eng_string(self, context=None):
        """Convert to a string, using engineering notation if an exponent is needed.

        Engineering notation has an exponent which is a multiple of 3.  This
        can leave up to 3 digits to the left of the decimal place and may
        require the addition of either one or two trailing zeros.
        """

    def __neg__(self, context=None):
        """Returns a copy with the sign switched.

        Rounds, if it has reason.
        """

    def __pos__(self, context=None):
        """Returns a copy, unless it is a sNaN.

        Rounds the number (if more than precision digits)
        """

    def __abs__(self, round=True, context=None):
        """Returns the absolute value of self.

        If the keyword argument 'round' is false, do not round.  The
        expression self.__abs__(round=False) is equivalent to
        self.copy_abs().
        """
    def __add__(self, other, context=None):
        """Returns self + other.

        -INF + INF (or the reverse) cause InvalidOperation errors.
        """

    def __sub__(self, other, context=None):
        """Return self - other"""

    def __rsub__(self, other, context=None):
        """Return other - self"""

    def __mul__(self, other, context=None):
        """Return self * other.

        (+-) INF * 0 (or its reverse) raise InvalidOperation.
        """

    def __truediv__(self, other, context=None):
        """Return self / other."""

    def _divide(self, other, context):
        """Return (self // other, self % other), to context.prec precision.

        Assumes that neither self nor other is a NaN, that self is not
        infinite and that other is nonzero.
        """

    def __rtruediv__(self, other, context=None):
        """Swaps self/other and returns __truediv__."""

    def __divmod__(self, other, context=None):
        """
        Return (self // other, self % other)
        """

    def __rdivmod__(self, other, context=None):
        """Swaps self/other and returns __divmod__."""

    def __mod__(self, other, context=None):
        """
        self % other
        """

    def __rmod__(self, other, context=None):
        """Swaps self/other and returns __mod__."""
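The division family obeys the usual divmod identity; a sketch:

```python
from decimal import Decimal

q, r = divmod(Decimal('7'), Decimal('3'))
# q is the integer quotient and r the remainder, with q*3 + r == 7.
```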
    def remainder_near(self, other, context=None):
        """
        Remainder nearest to 0-  abs(remainder-near) <= other/2
        """

    def __floordiv__(self, other, context=None):
        """self // other"""

    def __rfloordiv__(self, other, context=None):
        """Swaps self/other and returns __floordiv__."""

    def __float__(self):
        """Float representation."""
        # Converting a signaling NaN raises ValueError
        # ("Cannot convert signaling NaN to float").

    def __int__(self):
        """Converts self to an int, truncating if necessary."""
        # NaN raises ValueError; an infinity raises OverflowError.

    def real(self):
        return self
    real = property(real)

    def imag(self):
        return Decimal(0)
    imag = property(imag)

    def conjugate(self):
        return self

    def __complex__(self):
        return complex(float(self))

    def __long__(self):
        """Converts to a long.

        Equivalent to long(int(self))
        """
    def _fix_nan(self, context):
        """Decapitate the payload of a NaN to fit the context"""

    def _fix(self, context):
        """Round if it is necessary to keep self within prec precision.

        Rounds and fixes the exponent.  Does not raise on a sNaN.

        Arguments:
        self - Decimal instance
        context - context used.
        """

    # Rounding helpers selected through the rounding-mode dispatch table:

    def _round_down(self, prec):
        """Also known as round-towards-0, truncate."""

    def _round_up(self, prec):
        """Rounds away from 0."""

    def _round_half_up(self, prec):
        """Rounds 5 up (away from 0)"""

    def _round_half_down(self, prec):
        """Round 5 down"""

    def _round_half_even(self, prec):
        """Round 5 to even, rest to nearest."""

    def _round_ceiling(self, prec):
        """Rounds up (not away from 0 if negative.)"""

    def _round_floor(self, prec):
        """Rounds down (not towards 0 if negative)"""

    def _round_05up(self, prec):
        """Round down unless digit prec-1 is 0 or 5."""

    def fma(self, other, third, context=None):
        """Fused multiply-add.

        Returns self*other+third with no rounding of the intermediate
        product self*other.

        self and other are multiplied together, with no rounding of
        the result.  The third operand is then added to the result,
        and a single final rounding is performed.
        """
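Because fma skips the intermediate rounding, it can differ from a separate multiply-then-add at low precision; a sketch:

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3
    # fma keeps the exact product 1.0201 before the single final rounding.
    fused = Decimal('1.01').fma(Decimal('1.01'), Decimal('-1'))
    # The plain expression rounds the product to 1.02 first.
    separate = Decimal('1.01') * Decimal('1.01') + Decimal('-1')
```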
    def _power_modulo(self, other, modulo, context=None):
        """Three argument version of __pow__"""
        # InvalidOperation messages raised here include "pow() 3rd argument
        # not allowed unless all arguments are integers", "pow() 2nd argument
        # cannot be negative when 3rd argument specified", "pow() 3rd
        # argument cannot be 0", "insufficient precision: pow() 3rd argument
        # must not have more than precision digits" and "at least one of
        # pow() 1st argument and 2nd argument must be nonzero; 0**0 is not
        # defined".

    def _power_exact(self, other, p):
        """Attempt to compute self**other exactly.

        Given Decimals self and other and an integer p, attempt to
        compute an exact result for the power self**other, with p
        digits of precision.  Return None if self**other is not
        exactly representable in p digits.

        Assumes that elimination of special cases has already been
        performed: self and other must both be nonspecial; self must
        be positive and not numerically equal to 1; other must be
        nonzero.  For efficiency, other._exp should not be too large,
        so that 10**abs(other._exp) is a feasible calculation."""
    def __pow__(self, other, modulo=None, context=None):
        """Return self ** other [ % modulo].

        With two arguments, compute self**other.

        With three arguments, compute (self**other) % modulo.  For the
        three argument form, the following restrictions on the
        arguments hold:

         - all three arguments must be integral
         - other must be nonnegative
         - either self or other (or both) must be nonzero
         - modulo must be nonzero and must have at most p digits,
           where p is the context precision.

        If any of these restrictions is violated the InvalidOperation
        flag is raised.

        The result of pow(self, other, modulo) is identical to the
        result that would be obtained by computing (self**other) %
        modulo with unbounded precision, but is computed more
        efficiently.  It is always exact.
        """
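For integral operands the three-argument form behaves like built-in pow with exact modular arithmetic; a sketch:

```python
from decimal import Decimal

modular = pow(Decimal(7), Decimal(23), Decimal(5))
plain = Decimal(7 ** 23 % 5)
```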
    def __rpow__(self, other, context=None):
        """Swaps self/other and returns __pow__."""

    def normalize(self, context=None):
        """Normalize- strip trailing 0s, change anything equal to 0 to 0e0"""

    def quantize(self, exp, rounding=None, context=None, watchexp=True):
        """Quantize self so its exponent is the same as that of exp.

        Similar to self._rescale(exp._exp) but with error checking.
        """
        # InvalidOperation messages include "quantize with one INF",
        # "target exponent out of bounds in quantize", "exponent of quantize
        # result too large for current context" and "quantize result has too
        # many digits for current context".

    def same_quantum(self, other):
        """Return True if self and other have the same exponent; otherwise
        return False.

        If either operand is a special value, the following rules are used:
           * return True if both operands are infinities
           * return True if both operands are NaNs
           * otherwise, return False.
        """
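quantize is the idiomatic way to fix a result to a given exponent, e.g. rounding a price to cents; a sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal('19.996')
cents = price.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
# cents now has the same exponent (-2) as the quantum 0.01.
```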
    def _rescale(self, exp, rounding):
        """Rescale self so that the exponent is exp, either by padding with zeros
        or by truncating digits, using the given rounding mode.

        Specials are returned without change.  This operation is
        quiet: it raises no flags, and uses no information from the
        context.

        exp = exp to scale to (an integer)
        rounding = rounding mode
        """

    def _round(self, places, rounding):
        """Round a nonzero, nonspecial Decimal to a fixed number of
        significant figures, using the given rounding mode.

        Infinities, NaNs and zeros are returned unaltered.

        This operation is quiet: it raises no flags, and uses no
        information from the context.
        """
        # Raises ValueError ("argument should be at least 1 in _round").

    def to_integral_exact(self, rounding=None, context=None):
        """Rounds to a nearby integer.

        If no rounding mode is specified, take the rounding mode from
        the context.  This method raises the Rounded and Inexact flags
        when appropriate.

        See also: to_integral_value, which does exactly the same as
        this method except that it doesn't raise Inexact or Rounded.
        """

    def to_integral_value(self, rounding=None, context=None):
        """Rounds to the nearest integer, without raising inexact, rounded."""

    to_integral = to_integral_value

    def sqrt(self, context=None):
        """Return the square root of self."""
        # A negative nonzero operand raises InvalidOperation
        # ("sqrt(-x), x > 0").

    def max(self, other, context=None):
        """Returns the larger value.

        Like max(self, other) except if one is not a number, returns
        NaN (and signals if one is sNaN).  Also rounds.
        """
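to_integral_value honours an explicit rounding mode, and sqrt is exact when the operand is a perfect square; a sketch:

```python
from decimal import Decimal, ROUND_CEILING

half_even = Decimal('2.5').to_integral_value()  # default: ties to even
ceiling = Decimal('2.5').to_integral_value(rounding=ROUND_CEILING)
root = Decimal('9').sqrt()
```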
    def min(self, other, context=None):
        """Returns the smaller value.

        Like min(self, other) except if one is not a number, returns
        NaN (and signals if one is sNaN).  Also rounds.
        """

    def _isinteger(self):
        """Returns whether self is an integer"""

    def _iseven(self):
        """Returns True if self is even.  Assumes self is an integer."""

    def adjusted(self):
        """Return the adjusted exponent of self"""

    def canonical(self, context=None):
        """Returns the same Decimal object.

        As we do not have different encodings for the same number, the
        received object already is in its canonical form.
        """

    def compare_signal(self, other, context=None):
        """Compares self to the other operand numerically.

        It's pretty much like compare(), but all NaNs signal, with signaling
        NaNs taking precedence over quiet NaNs.
        """

    def compare_total(self, other):
        """Compares self to other using the abstract representations.

        This is not like the standard compare, which use their numerical
        value. Note that a total ordering is defined for all possible abstract
        representations.
        """

    def compare_total_mag(self, other):
        """Compares self to other using abstract repr., ignoring sign.

        Like compare_total, but with operand's sign ignored and assumed to be 0.
        """

    def copy_abs(self):
        """Returns a copy with the sign set to 0. """
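compare_total distinguishes representations that compare numerically equal; a sketch:

```python
from decimal import Decimal

numeric = Decimal('12.0').compare(Decimal('12'))      # equal in value
total = Decimal('12.0').compare_total(Decimal('12'))  # 12.0 sorts before 12
```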
t|jdd��kr�t
dd|jd�}n�|j	dkr(|t
t|j�dd��kr(t
dd|j�d�}n7|j	dkrj||krjt
ddd|dd|�}n�|j	dkr�||dkr�t
dd|d|d�}n�t|�}|j|j}}|jdkr�|}nd}xZtrFt||||�\}	}
|	d	d
t
t|	��|dr9Pn|d7}q�Wt
dt|	�|
�}|j�}|jt�}|j|�}||_|S(sReturns e ** self.Ri����iiiR�RFR2ii
N(RARRRzR?RRR4R�R&RXRWR5R%R�R]RHRJR.R(t_dexpR3R4RR�R3(RRR+RtadjR�R5R�R"R�RJR3((s/usr/lib64/python2.7/decimal.pyRJrsJ
	26& "
	&	cCstS(s�Return True if self is canonical; otherwise return False.

        Currently, the encoding of a Decimal instance is always
        canonical, so this method returns True for any Decimal.
        (R((R((s/usr/lib64/python2.7/decimal.pytis_canonical�scCs|jS(s�Return True if self is finite; otherwise return False.

        A Decimal instance is considered finite if it is neither
        infinite nor a NaN.
        (RE(R((s/usr/lib64/python2.7/decimal.pyt	is_finite�scCs
|jdkS(s8Return True if self is infinite; otherwise return False.RN(RD(R((s/usr/lib64/python2.7/decimal.pyR-�scCs
|jdkS(s>Return True if self is a qNaN or sNaN; otherwise return False.R$RM(R$RM(RD(R((s/usr/lib64/python2.7/decimal.pyR��scCs?|js|rtS|dkr,t�}n|j|j�kS(s?Return True if self is a normal number; otherwise return False.N(RERYRARR*R�(RR((s/usr/lib64/python2.7/decimal.pyt	is_normal�s
cCs
|jdkS(s;Return True if self is a quiet NaN; otherwise return False.R$(RD(R((s/usr/lib64/python2.7/decimal.pyR��scCs
|jdkS(s8Return True if self is negative; otherwise return False.i(R&(R((s/usr/lib64/python2.7/decimal.pyt	is_signed�scCs
|jdkS(s?Return True if self is a signaling NaN; otherwise return False.RM(RD(R((s/usr/lib64/python2.7/decimal.pyR��scCs?|js|rtS|dkr,t�}n|j�|jkS(s9Return True if self is subnormal; otherwise return False.N(RERYRARR�R*(RR((s/usr/lib64/python2.7/decimal.pytis_subnormal�s
cCs|jo|jdkS(s6Return True if self is a zero; otherwise return False.RF(RER'(R((s/usr/lib64/python2.7/decimal.pytis_zero�scCs�|jt|j�d}|dkrBtt|dd��dS|dkrnttd|dd��dSt|�}|j|j}}|dkr�t|d|�}t|�}t|�t|�||kS|ttd||��dS(s�Compute a lower bound for the adjusted exponent of self.ln().
        In other words, compute r such that self.ln() >= 10**r.  Assumes
        that self is finite and positive and that self != 1.
        iii
i����i����i(RDRXR'RWR]RHRJ(RRHR�R5R�tnumtden((s/usr/lib64/python2.7/decimal.pyt
_ln_exp_bound�s c
Csz|d	krt�}n|jd|�}|r4|S|s>tS|j�dkrTtS|tkrdtS|jdkr�|j	t
d�St|�}|j|j
}}|j}||j�d}xVtrt|||�}|ddttt|���|dr
Pn|d7}q�Wtt|dk�tt|��|�}|j�}|jt�}	|j|�}|	|_|S(
s/Returns the natural (base e) logarithm of self.Risln of a negative valueiii
iiN(RARRt_NegativeInfinityRzt	_InfinityRR?R&RURR]RHRJR4RQR(t_dlogRXRWR\R%R3R4RR�R3(
RRR+R�R5R�RR0R�R3((s/usr/lib64/python2.7/decimal.pytlns:			,+	cCs|jt|j�d}|dkr:tt|��dS|dkr^ttd|��dSt|�}|j|j}}|dkr�t|d|�}td|�}t|�t|�||kdStd||�}t|�||dkdS(	s�Compute a lower bound for the adjusted exponent of self.log10().
        In other words, find r such that self.log10() >= 10**r.
        Assumes that self is finite and positive and that self != 1.
        ii����i����ii
i�it231(RDRXR'RWR]RHRJ(RRHR�R5R�RORP((s/usr/lib64/python2.7/decimal.pyR@s"c
Cs�|dkrt�}n|jd|�}|r4|S|s>tS|j�dkrTtS|jdkrs|jtd�S|j	ddkr�|j	ddt
|j	�dkr�t|jt
|j	�d�}n�t
|�}|j|j}}|j}||j�d}xVtrat|||�}|dd	t
tt|���|drTPn|d
7}qWtt|dk�tt|��|�}|j�}|jt�}	|j|�}|	|_|S(s&Returns the base 10 logarithm of self.Rislog10 of a negative valueiR�RFiii
iN(RARRRRRzRSR&RURR'RXRRDR]RHRJR4RR(t_dlog10RWR\R%R3R4RR�R3(
RRR+R�R5R�RR0R�R3((s/usr/lib64/python2.7/decimal.pytlog10^s:	7#		,+	cCs||jd|�}|r|S|dkr4t�}n|j�rDtS|s]|jtdd�St|j��}|j	|�S(sM Returns the exponent of the magnitude of self's MSD.

        The result is the integer which is the exponent of the magnitude
        of the most significant digit of self (as though it were truncated
        to a single digit while maintaining the value of that digit and
        without limiting the resulting exponent).
        Rslogb(0)iN(
RRARRzRSRURRR�R�(RRR+((s/usr/lib64/python2.7/decimal.pytlogb�s	cCsJ|jdks|jdkr"tSx!|jD]}|dkr,tSq,WtS(s�Return True if self is a logical operand.

        For being logical, it must be a finite number with a sign of 0,
        an exponent of 0, and a coefficient whose digits must all be
        either 0 or 1.
        it01(R&RDRYR'R((Rtdig((s/usr/lib64/python2.7/decimal.pyt
_islogical�scCs�|jt|�}|dkr0d||}n|dkrM||j}n|jt|�}|dkr}d||}n|dkr�||j}n||fS(NiRF(R4RX(RRtopatopbtdif((s/usr/lib64/python2.7/decimal.pyt
_fill_logical�scCs�|dkrt�}nt|dt�}|j�sD|j�rQ|jt�S|j||j|j�\}}dj	gt
||�D](\}}tt|�t|�@�^q��}t
d|jd�p�dd�S(s;Applies an 'and' operation between self and other's digits.R�RiRFN(RARR�R(R\RURR`R'RbtzipRWRHR%RZ(RR|RR]R^RtbRx((s/usr/lib64/python2.7/decimal.pytlogical_and�s
!GcCs;|dkrt�}n|jtdd|jd�|�S(sInvert all its digits.iR�N(RARtlogical_xorR%R4(RR((s/usr/lib64/python2.7/decimal.pytlogical_invert�scCs�|dkrt�}nt|dt�}|j�sD|j�rQ|jt�S|j||j|j�\}}dj	gt
||�D](\}}tt|�t|�B�^q��}t
d|jd�p�dd�S(s:Applies an 'or' operation between self and other's digits.R�RiRFN(RARR�R(R\RURR`R'RbRaRWRHR%RZ(RR|RR]R^RRbRx((s/usr/lib64/python2.7/decimal.pyt
logical_or�s
!GcCs�|dkrt�}nt|dt�}|j�sD|j�rQ|jt�S|j||j|j�\}}dj	gt
||�D](\}}tt|�t|�A�^q��}t
d|jd�p�dd�S(s;Applies an 'xor' operation between self and other's digits.R�RiRFN(RARR�R(R\RURR`R'RbRaRWRHR%RZ(RR|RR]R^RRbRx((s/usr/lib64/python2.7/decimal.pyRd�s
!GcCst|dt�}|dkr*t�}n|js<|jr�|j�}|j�}|s`|r�|dkr�|dkr�|j|�S|dkr�|dkr�|j|�S|j||�Sn|j�j	|j��}|dkr�|j
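The logical operations act digit-wise on operands made of 0s and 1s; a sketch:

```python
from decimal import Decimal

a = Decimal('1100')
b = Decimal('1010')
anded = a.logical_and(b)
ored = a.logical_or(b)
xored = a.logical_xor(b)
```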
|�}n|dkr|}n|}|j|�S(s8Compares the values numerically with their sign ignored.R�iii����N(R�R(RARRERyR�RR�R�R8(RR|RR9R:R5R+((s/usr/lib64/python2.7/decimal.pytmax_mag
s&

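Not part of the original module: a brief usage sketch of the logical operations defined above, using only the public `decimal` API. Logical operands are finite Decimals with sign 0, exponent 0, and digits that are all 0 or 1; the digits are combined pairwise.

```python
from decimal import Decimal

a = Decimal('1100')
b = Decimal('1010')

land = a.logical_and(b)   # pairwise AND of the digit strings
lor = a.logical_or(b)     # pairwise OR
lxor = a.logical_xor(b)   # pairwise XOR; leading zeros are stripped
print(land, lor, lxor)    # 1000 1110 110
```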
    def min_mag(self, other, context=None):
        """Compares the values numerically with their sign ignored."""
        other = _convert_other(other, raiseit=True)

        if context is None:
            context = getcontext()

        if self._is_special or other._is_special:
            # If one operand is a quiet NaN and the other is number, then the
            # number is always returned
            sn = self._isnan()
            on = other._isnan()
            if sn or on:
                if on == 1 and sn == 0:
                    return self._fix(context)
                if sn == 1 and on == 0:
                    return other._fix(context)
                return self._check_nans(other, context)

        c = self.copy_abs()._cmp(other.copy_abs())
        if c == 0:
            c = self.compare_total(other)

        if c == -1:
            ans = self
        else:
            ans = other

        return ans._fix(context)
    def next_minus(self, context=None):
        """Returns the largest representable number smaller than itself."""
        if context is None:
            context = getcontext()

        ans = self._check_nans(context=context)
        if ans:
            return ans

        if self._isinfinity() == -1:
            return _NegativeInfinity
        if self._isinfinity() == 1:
            return _dec_from_triple(0, '9'*context.prec, context.Etop())

        context = context.copy()
        context._set_rounding(ROUND_FLOOR)
        context._ignore_all_flags()
        new_self = self._fix(context)
        if new_self != self:
            return new_self
        return self.__sub__(_dec_from_triple(0, '1', context.Etiny()-1),
                            context)
    def next_plus(self, context=None):
        """Returns the smallest representable number larger than itself."""
        if context is None:
            context = getcontext()

        ans = self._check_nans(context=context)
        if ans:
            return ans

        if self._isinfinity() == 1:
            return _Infinity
        if self._isinfinity() == -1:
            return _dec_from_triple(1, '9'*context.prec, context.Etop())

        context = context.copy()
        context._set_rounding(ROUND_CEILING)
        context._ignore_all_flags()
        new_self = self._fix(context)
        if new_self != self:
            return new_self
        return self.__add__(_dec_from_triple(0, '1', context.Etiny()-1),
                            context)
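Not part of the original module: a usage sketch of the next-plus / next-minus / next-toward family, which steps to the adjacent representable value under the current context's precision. The expected values below follow the decimal arithmetic specification examples at 9-digit precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 9
x = Decimal('1')

up = x.next_plus()                     # smallest representable value above x
down = x.next_minus()                  # largest representable value below x
toward = x.next_toward(Decimal('-5'))  # steps in the direction of the operand
print(up, down, toward)                # 1.00000001 0.999999999 0.999999999
```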
    def next_toward(self, other, context=None):
        """Returns the number closest to self, in the direction towards other.

        The result is the closest representable number to self
        (excluding self) that is in the direction towards other,
        unless both have the same value.  If the two operands are
        numerically equal, then the result is a copy of self with the
        sign set to be the same as the sign of other.
        """
        other = _convert_other(other, raiseit=True)

        if context is None:
            context = getcontext()

        ans = self._check_nans(other, context)
        if ans:
            return ans

        comparison = self._cmp(other)
        if comparison == 0:
            return self.copy_sign(other)

        if comparison == -1:
            ans = self.next_plus(context)
        else: # comparison == 1
            ans = self.next_minus(context)

        # decide which flags to raise using value of ans
        if ans._isinfinity():
            context._raise_error(Overflow,
                                 'Infinite result from next_toward',
                                 ans._sign)
            context._raise_error(Inexact)
            context._raise_error(Rounded)
        elif ans.adjusted() < context.Emin:
            context._raise_error(Underflow)
            context._raise_error(Subnormal)
            context._raise_error(Inexact)
            context._raise_error(Rounded)
            # if precision == 1 then we don't raise Clamped for a
            # result 0E-Etiny.
            if not ans:
                context._raise_error(Clamped)

        return ans
    def number_class(self, context=None):
        """Returns an indication of the class of self.

        The class is one of the following strings:
          sNaN
          NaN
          -Infinity
          -Normal
          -Subnormal
          -Zero
          +Zero
          +Subnormal
          +Normal
          +Infinity
        """
        if self.is_snan():
            return "sNaN"
        if self.is_qnan():
            return "NaN"
        inf = self._isinfinity()
        if inf == 1:
            return "+Infinity"
        if inf == -1:
            return "-Infinity"
        if self.is_zero():
            if self._sign:
                return "-Zero"
            else:
                return "+Zero"
        if context is None:
            context = getcontext()
        if self.is_subnormal(context=context):
            if self._sign:
                return "-Subnormal"
            else:
                return "+Subnormal"
        # just a normal, regular, boring number, :)
        if self._sign:
            return "-Normal"
        else:
            return "+Normal"
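Not part of the original module: a quick sketch of the number-class strings as returned through the public API.

```python
from decimal import Decimal

# Each Decimal falls into exactly one of the ten classes listed above.
nc_normal = Decimal('2.50').number_class()
nc_zero = Decimal('0').number_class()
nc_inf = Decimal('-Inf').number_class()
nc_nan = Decimal('NaN').number_class()
print(nc_normal, nc_zero, nc_inf, nc_nan)  # +Normal +Zero -Infinity NaN
```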
    def radix(self, context=None):
        """Just returns 10, as this is Decimal, :)"""
        return Decimal(10)
    def rotate(self, other, context=None):
        """Returns a rotated copy of self, value-of-other times."""
        if context is None:
            context = getcontext()

        other = _convert_other(other, raiseit=True)

        ans = self._check_nans(other, context)
        if ans:
            return ans

        if other._exp != 0:
            return context._raise_error(InvalidOperation)
        if not (-context.prec <= int(other) <= context.prec):
            return context._raise_error(InvalidOperation)

        if self._isinfinity():
            return Decimal(self)

        # get values, pad if necessary
        torot = int(other)
        rotdig = self._int
        topad = context.prec - len(rotdig)
        if topad > 0:
            rotdig = '0'*topad + rotdig
        elif topad < 0:
            rotdig = rotdig[-topad:]

        # let's rotate!
        rotated = rotdig[torot:] + rotdig[:torot]
        return _dec_from_triple(self._sign,
                                rotated.lstrip('0') or '0', self._exp)
    def scaleb(self, other, context=None):
        """Returns self operand after adding the second value to its exp."""
        if context is None:
            context = getcontext()

        other = _convert_other(other, raiseit=True)

        ans = self._check_nans(other, context)
        if ans:
            return ans

        if other._exp != 0:
            return context._raise_error(InvalidOperation)
        liminf = -2 * (context.Emax + context.prec)
        limsup =  2 * (context.Emax + context.prec)
        if not (liminf <= int(other) <= limsup):
            return context._raise_error(InvalidOperation)

        if self._isinfinity():
            return Decimal(self)

        d = _dec_from_triple(self._sign, self._int, self._exp + int(other))
        d = d._fix(context)
        return d
    def shift(self, other, context=None):
        """Returns a shifted copy of self, value-of-other times."""
        if context is None:
            context = getcontext()

        other = _convert_other(other, raiseit=True)

        ans = self._check_nans(other, context)
        if ans:
            return ans

        if other._exp != 0:
            return context._raise_error(InvalidOperation)
        if not (-context.prec <= int(other) <= context.prec):
            return context._raise_error(InvalidOperation)

        if self._isinfinity():
            return Decimal(self)

        # get values, pad if necessary
        torot = int(other)
        rotdig = self._int
        topad = context.prec - len(rotdig)
        if topad > 0:
            rotdig = '0'*topad + rotdig
        elif topad < 0:
            rotdig = rotdig[-topad:]

        # let's shift!
        if torot < 0:
            shifted = rotdig[:torot]
        else:
            shifted = rotdig + '0'*torot
            shifted = shifted[-context.prec:]

        return _dec_from_triple(self._sign,
                                shifted.lstrip('0') or '0', self._exp)
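Not part of the original module: a sketch of shift vs. rotate vs. scaleb through the public API. The coefficient is first padded or truncated to the context precision, so the expected values below assume 9-digit precision (they match the decimal specification's examples).

```python
from decimal import Decimal, getcontext

getcontext().prec = 9   # coefficient is padded/truncated to prec digits

shifted = Decimal('34').shift(8)     # digits slide left, zeros fill in
rotated = Decimal('34').rotate(8)    # digits wrap around instead
scaled = Decimal('7.50').scaleb(3)   # adds 3 to the exponent
print(shifted, rotated, scaled)      # 400000000 400000003 7.50E+3
```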
    # Support for pickling, copy, and deepcopy
    def __reduce__(self):
        return (self.__class__, (str(self),))

    def __copy__(self):
        if type(self) is Decimal:
            return self     # I'm immutable; therefore I am my own clone
        return self.__class__(str(self))

    def __deepcopy__(self, memo):
        if type(self) is Decimal:
            return self     # My components are also immutable
        return self.__class__(str(self))

    # PEP 3101 support.  the _localeconv keyword argument should be
    # considered private: it's provided for ease of testing only.
    def __format__(self, specifier, context=None, _localeconv=None):
        """Format a Decimal instance according to the given specifier.

        The specifier should be a standard format specifier, with the
        form described in PEP 3101.  Formatting types 'e', 'E', 'f',
        'F', 'g', 'G', 'n' and '%' are supported.  If the formatting
        type is omitted it defaults to 'g' or 'G', depending on the
        value of context.capitals.
        """
        # Note: PEP 3101 says that if the type is not present then
        # there should be at least one digit after the decimal point.
        # We take the liberty of ignoring this requirement for
        # Decimal---it's presumably there to make sure that
        # format(float, '') behaves similarly to str(float).
        if context is None:
            context = getcontext()

        spec = _parse_format_specifier(specifier, _localeconv=_localeconv)

        # special values don't care about the type or precision
        if self._is_special:
            sign = _format_sign(self._sign, spec)
            body = str(self.copy_abs())
            return _format_align(sign, body, spec)

        # a type of None defaults to 'g' or 'G', depending on context
        if spec['type'] is None:
            spec['type'] = ['g', 'G'][context.capitals]

        # if type is '%', adjust exponent of self accordingly
        if spec['type'] == '%':
            self = _dec_from_triple(self._sign, self._int, self._exp+2)

        # round if necessary, taking rounding mode from the context
        rounding = context.rounding
        precision = spec['precision']
        if precision is not None:
            if spec['type'] in 'eE':
                self = self._round(precision+1, rounding)
            elif spec['type'] in 'fF%':
                self = self._rescale(-precision, rounding)
            elif spec['type'] in 'gG' and len(self._int) > precision:
                self = self._round(precision, rounding)
        # special case: zeros with a positive exponent can't be
        # represented in fixed point; rescale them to 0e0.
        if not self and self._exp > 0 and spec['type'] in 'fF%':
            self = self._rescale(0, rounding)

        # figure out placement of the decimal point
        leftdigits = self._exp + len(self._int)
        if spec['type'] in 'eE':
            if not self and precision is not None:
                dotplace = 1 - precision
            else:
                dotplace = 1
        elif spec['type'] in 'fF%':
            dotplace = leftdigits
        elif spec['type'] in 'gG':
            if self._exp <= 0 and leftdigits > -6:
                dotplace = leftdigits
            else:
                dotplace = 1

        # find digits before and after decimal point, and get exponent
        if dotplace < 0:
            intpart = '0'
            fracpart = '0'*(-dotplace) + self._int
        elif dotplace > len(self._int):
            intpart = self._int + '0'*(dotplace-len(self._int))
            fracpart = ''
        else:
            intpart = self._int[:dotplace] or '0'
            fracpart = self._int[dotplace:]
        exp = leftdigits-dotplace

        # done with the decimal-specific stuff;  hand over the rest
        # of the formatting to the _format_number function
        return _format_number(self._sign, intpart, fracpart, exp, spec)
.*	!'			cCs7tjt�}||_||_||_||_|S(s�Create a decimal instance directly, without any validation,
    normalization (e.g. removal of leading zeros) or argument
    conversion.

    This function is for *internal use only*.
    """

    self = object.__new__(Decimal)
    self._sign = sign
    self._int = coefficient
    self._exp = exponent
    self._is_special = special

    return self

# Register Decimal as a kind of Number (an abstract base class).
# However, do not register it as Real (because Decimals are not
# interoperable with floats).
_numbers.Number.register(Decimal)


##### Context class #######################################################

class _ContextManager(object):
    """Context manager class to support localcontext().

      Sets a copy of the supplied context in __enter__() and restores
      the previous decimal context in __exit__()
    """
    def __init__(self, new_context):
        self.new_context = new_context.copy()
    def __enter__(self):
        self.saved_context = getcontext()
        setcontext(self.new_context)
        return self.new_context
    def __exit__(self, t, v, tb):
        setcontext(self.saved_context)
class Context(object):
    """Contains the context for a Decimal instance.

    Contains:
    prec - precision (for use in rounding, division, square roots..)
    rounding - rounding type (how you round)
    traps - If traps[exception] = 1, then the exception is
                    raised when it is caused.  Otherwise, a value is
                    substituted in.
    flags  - When an exception is caused, flags[exception] is set.
             (Whether or not the trap_enabler is set)
             Should be reset by user of Decimal instance.
    Emin -   Minimum exponent
    Emax -   Maximum exponent
    capitals -      If 1, 1*10^1 is printed as 1E+1.
                    If 0, printed as 1e1
    _clamp - If 1, change exponents if too high (Default 0)
    """

    def __init__(self, prec=None, rounding=None,
                 traps=None, flags=None,
                 Emin=None, Emax=None,
                 capitals=None, _clamp=0,
                 _ignored_flags=None):
        # Set defaults; for everything except flags and _ignored_flags,
        # inherit from DefaultContext.
        try:
            dc = DefaultContext
        except NameError:
            pass

        self.prec = prec if prec is not None else dc.prec
        self.rounding = rounding if rounding is not None else dc.rounding
        self.Emin = Emin if Emin is not None else dc.Emin
        self.Emax = Emax if Emax is not None else dc.Emax
        self.capitals = capitals if capitals is not None else dc.capitals
        self._clamp = _clamp if _clamp is not None else dc._clamp

        if _ignored_flags is None:
            self._ignored_flags = []
        else:
            self._ignored_flags = _ignored_flags

        if traps is None:
            self.traps = dc.traps.copy()
        elif not isinstance(traps, dict):
            self.traps = dict((s, int(s in traps)) for s in _signals)
        else:
            self.traps = traps

        if flags is None:
            self.flags = dict.fromkeys(_signals, 0)
        elif not isinstance(flags, dict):
            self.flags = dict((s, int(s in flags)) for s in _signals)
        else:
            self.flags = flags

    def __repr__(self):
        """Show the current context."""
        s = []
        s.append('Context(prec=%(prec)d, rounding=%(rounding)s, '
                 'Emin=%(Emin)d, Emax=%(Emax)d, capitals=%(capitals)d'
                 % vars(self))
        names = [f.__name__ for f, v in self.flags.items() if v]
        s.append('flags=[' + ', '.join(names) + ']')
        names = [t.__name__ for t, v in self.traps.items() if v]
        s.append('traps=[' + ', '.join(names) + ']')
        return ', '.join(s) + ')'

    def clear_flags(self):
        """Reset all flags to zero"""
        for flag in self.flags:
            self.flags[flag] = 0

    def _shallow_copy(self):
        """Returns a shallow copy from self."""
        nc = Context(self.prec, self.rounding, self.traps,
                     self.flags, self.Emin, self.Emax,
                     self.capitals, self._clamp, self._ignored_flags)
        return nc

    def copy(self):
        """Returns a deep copy from self."""
        nc = Context(self.prec, self.rounding, self.traps.copy(),
                     self.flags.copy(), self.Emin, self.Emax,
                     self.capitals, self._clamp, self._ignored_flags)
        return nc
    __copy__ = copy

    def _raise_error(self, condition, explanation = None, *args):
        """Handles an error
�	}|S(sReturns a deep copy from self.(RR4R3RR;RR*R5R�R�R�(RR�((s/usr/lib64/python2.7/decimal.pyR;scGsqtj||�}||jkr4|�j||�Sd|j|<|j|sa|�j||�S||��dS(s#Handles an error

        If the flag is in _ignored_flags, returns the default response.
        Otherwise, it sets the flag, then, if the corresponding
        trap_enabler is set, it reraises the exception.  Otherwise, it returns
        the default value after setting the flag.
        iN(t_condition_maptgetR�R RR(Rt	conditiontexplanationRterror((s/usr/lib64/python2.7/decimal.pyRUs

cCs
|jt�S(s$Ignore all flags, if they are raised(t
_ignore_flagsR(R((s/usr/lib64/python2.7/decimal.pyRi"scGs |jt|�|_t|�S(s$Ignore the flags, if they are raised(R�R^(RR((s/usr/lib64/python2.7/decimal.pyR�&scGsQ|r,t|dttf�r,|d}nx|D]}|jj|�q3WdS(s+Stop ignoring the flags, if they are raisediN(RQR_R^R�tremove(RRR�((s/usr/lib64/python2.7/decimal.pyt
_regard_flags-s

cCst|j|jd�S(s!Returns Etiny (= Emin - prec + 1)i(RHR*R4(R((s/usr/lib64/python2.7/decimal.pyR�7scCst|j|jd�S(s,Returns maximum exponent (= Emax - prec + 1)i(RHR5R4(R((s/usr/lib64/python2.7/decimal.pyR�;scCs|j}||_|S(s�Sets the rounding type.

        Sets the rounding type, and returns the current (previous)
        rounding type.  Often used like:

        context = context.copy()
        # so you don't change the calling context
        # if an error occurs in the middle.
        rounding = context._set_rounding(ROUND_UP)
        val = self.__sub__(other, context=context)
        context._set_rounding(rounding)

        This will make it round up for that operation.
        """
        rounding = self.rounding
        self.rounding = type
        return rounding

    def create_decimal(self, num='0'):
        """Creates a new Decimal instance but using self as context.

        This method implements the to-number operation of the
        IBM Decimal specification."""

        if isinstance(num, basestring) and num != num.strip():
            return self._raise_error(ConversionSyntax,
                                     "no trailing or leading whitespace is "
                                     "permitted.")

        d = Decimal(num, context=self)
        if d._isnan() and len(d._int) > self.prec - self._clamp:
            return self._raise_error(ConversionSyntax,
                                     "diagnostic info too long in NaN")
        return d._fix(self)
        as the context.

        >>> context = Context(prec=5, rounding=ROUND_DOWN)
        >>> context.create_decimal_from_float(3.1415926535897932)
        Decimal('3.1415')
        >>> context = Context(prec=5, traps=[Inexact])
        >>> context.create_decimal_from_float(3.1415926535897932)
        Traceback (most recent call last):
            ...
        Inexact: None

        (RReR�(RRuRv((s/usr/lib64/python2.7/decimal.pytcreate_decimal_from_floatcscCs"t|dt�}|jd|�S(s[Returns the absolute value of the operand.

        If the operand is negative, the result is the same as using the minus
        operation on the operand.  Otherwise, the result is the same as using
        the plus operation on the operand.

        >>> ExtendedContext.abs(Decimal('2.1'))
        Decimal('2.1')
        >>> ExtendedContext.abs(Decimal('-100'))
        Decimal('100')
        >>> ExtendedContext.abs(Decimal('101.5'))
        Decimal('101.5')
        >>> ExtendedContext.abs(Decimal('-101.5'))
        Decimal('101.5')
        >>> ExtendedContext.abs(-1)
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return a.__abs__(context=self)

    def add(self, a, b):
        """Return the sum of the two operands.

        >>> ExtendedContext.add(Decimal('12'), Decimal('7.00'))
        Decimal('19.00')
        >>> ExtendedContext.add(Decimal('1E+2'), Decimal('1.01E+4'))
        Decimal('1.02E+4')
        >>> ExtendedContext.add(1, Decimal(2))
        Decimal('3')
        >>> ExtendedContext.add(Decimal(8), 5)
        Decimal('13')
        >>> ExtendedContext.add(5, 5)
        Decimal('10')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__add__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def _apply(self, a):
        return str(a._fix(self))

    def canonical(self, a):
        """Returns the same Decimal object.

        As we do not have different encodings for the same number, the
        received object already is in its canonical form.

        >>> ExtendedContext.canonical(Decimal('2.50'))
        Decimal('2.50')
        """
        return a.canonical(context=self)

    def compare(self, a, b):
        """Compares values numerically.

        If the signs of the operands differ, a value representing each operand
        ('-1' if the operand is less than zero, '0' if the operand is zero or
        negative zero, or '1' if the operand is greater than zero) is used in
        place of that operand for the comparison instead of the actual
        operand.

        The comparison is then effected by subtracting the second operand from
        the first and then returning a value according to the result of the
        subtraction: '-1' if the result is less than zero, '0' if the result is
        zero or negative zero, or '1' if the result is greater than zero.

        >>> ExtendedContext.compare(Decimal('2.1'), Decimal('3'))
        Decimal('-1')
        >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.1'))
        Decimal('0')
        >>> ExtendedContext.compare(Decimal('2.1'), Decimal('2.10'))
        Decimal('0')
        >>> ExtendedContext.compare(Decimal('3'), Decimal('2.1'))
        Decimal('1')
        >>> ExtendedContext.compare(Decimal('2.1'), Decimal('-3'))
        Decimal('1')
        >>> ExtendedContext.compare(Decimal('-3'), Decimal('2.1'))
        Decimal('-1')
        >>> ExtendedContext.compare(1, 2)
        Decimal('-1')
        >>> ExtendedContext.compare(Decimal(1), 2)
        Decimal('-1')
        >>> ExtendedContext.compare(1, Decimal(2))
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.compare(b, context=self)

    def compare_signal(self, a, b):
        """Compares the values of the two operands numerically.

        It's pretty much like compare(), but all NaNs signal, with signaling
        NaNs taking precedence over quiet NaNs.

        >>> c = ExtendedContext
        >>> c.compare_signal(Decimal('2.1'), Decimal('3'))
        Decimal('-1')
        >>> c.compare_signal(Decimal('2.1'), Decimal('2.1'))
        Decimal('0')
        >>> c.flags[InvalidOperation] = 0
        >>> print c.flags[InvalidOperation]
        0
        >>> c.compare_signal(Decimal('NaN'), Decimal('2.1'))
        Decimal('NaN')
        >>> print c.flags[InvalidOperation]
        1
        >>> c.flags[InvalidOperation] = 0
        >>> print c.flags[InvalidOperation]
        0
        >>> c.compare_signal(Decimal('sNaN'), Decimal('2.1'))
        Decimal('NaN')
        >>> print c.flags[InvalidOperation]
        1
        >>> c.compare_signal(-1, 2)
        Decimal('-1')
        >>> c.compare_signal(Decimal(-1), 2)
        Decimal('-1')
        >>> c.compare_signal(-1, Decimal(2))
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.compare_signal(b, context=self)

    def compare_total(self, a, b):
        """Compares two operands using their abstract representation.

        This is not like the standard compare, which use their numerical
        value. Note that a total ordering is defined for all possible abstract
        representations.

        >>> ExtendedContext.compare_total(Decimal('12.73'), Decimal('127.9'))
        Decimal('-1')
        >>> ExtendedContext.compare_total(Decimal('-127'),  Decimal('12'))
        Decimal('-1')
        >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.3'))
        Decimal('-1')
        >>> ExtendedContext.compare_total(Decimal('12.30'), Decimal('12.30'))
        Decimal('0')
        >>> ExtendedContext.compare_total(Decimal('12.3'),  Decimal('12.300'))
        Decimal('1')
        >>> ExtendedContext.compare_total(Decimal('12.3'),  Decimal('NaN'))
        Decimal('-1')
        >>> ExtendedContext.compare_total(1, 2)
        Decimal('-1')
        >>> ExtendedContext.compare_total(Decimal(1), 2)
        Decimal('-1')
        >>> ExtendedContext.compare_total(1, Decimal(2))
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.compare_total(b)

    def compare_total_mag(self, a, b):
        """Compares two operands using their abstract representation ignoring sign.

        Like compare_total, but with operand's sign ignored and assumed to be 0.
        """
        a = _convert_other(a, raiseit=True)
        return a.compare_total_mag(b)

    def copy_abs(self, a):
        """Returns a copy of the operand with the sign set to 0.

        >>> ExtendedContext.copy_abs(Decimal('2.1'))
        Decimal('2.1')
        >>> ExtendedContext.copy_abs(Decimal('-100'))
        Decimal('100')
        >>> ExtendedContext.copy_abs(-1)
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return a.copy_abs()

    def copy_decimal(self, a):
        """Returns a copy of the decimal object.

        >>> ExtendedContext.copy_decimal(Decimal('2.1'))
        Decimal('2.1')
        >>> ExtendedContext.copy_decimal(Decimal('-1.00'))
        Decimal('-1.00')
        >>> ExtendedContext.copy_decimal(1)
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return Decimal(a)

    def copy_negate(self, a):
        """Returns a copy of the operand with the sign inverted.

        >>> ExtendedContext.copy_negate(Decimal('101.5'))
        Decimal('-101.5')
        >>> ExtendedContext.copy_negate(Decimal('-101.5'))
        Decimal('101.5')
        >>> ExtendedContext.copy_negate(1)
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.copy_negate()

    def copy_sign(self, a, b):
        """Copies the second operand's sign to the first one.

        In detail, it returns a copy of the first operand with the sign
        equal to the sign of the second operand.

        >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('7.33'))
        Decimal('1.50')
        >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('7.33'))
        Decimal('1.50')
        >>> ExtendedContext.copy_sign(Decimal( '1.50'), Decimal('-7.33'))
        Decimal('-1.50')
        >>> ExtendedContext.copy_sign(Decimal('-1.50'), Decimal('-7.33'))
        Decimal('-1.50')
        >>> ExtendedContext.copy_sign(1, -2)
        Decimal('-1')
        >>> ExtendedContext.copy_sign(Decimal(1), -2)
        Decimal('-1')
        >>> ExtendedContext.copy_sign(1, Decimal(-2))
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.copy_sign(b)

    def divide(self, a, b):
        """Decimal division in a specified context.

        >>> ExtendedContext.divide(Decimal('1'), Decimal('3'))
        Decimal('0.333333333')
        >>> ExtendedContext.divide(Decimal('2'), Decimal('3'))
        Decimal('0.666666667')
        >>> ExtendedContext.divide(Decimal('5'), Decimal('2'))
        Decimal('2.5')
        >>> ExtendedContext.divide(Decimal('1'), Decimal('10'))
        Decimal('0.1')
        >>> ExtendedContext.divide(Decimal('12'), Decimal('12'))
        Decimal('1')
        >>> ExtendedContext.divide(Decimal('8.00'), Decimal('2'))
        Decimal('4.00')
        >>> ExtendedContext.divide(Decimal('2.400'), Decimal('2.0'))
        Decimal('1.20')
        >>> ExtendedContext.divide(Decimal('1000'), Decimal('100'))
        Decimal('10')
        >>> ExtendedContext.divide(Decimal('1000'), Decimal('1'))
        Decimal('1000')
        >>> ExtendedContext.divide(Decimal('2.40E+6'), Decimal('2'))
        Decimal('1.20E+6')
        >>> ExtendedContext.divide(5, 5)
        Decimal('1')
        >>> ExtendedContext.divide(Decimal(5), 5)
        Decimal('1')
        >>> ExtendedContext.divide(5, Decimal(5))
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__div__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def divide_int(self, a, b):
        """Divides two numbers and returns the integer part of the result.

        >>> ExtendedContext.divide_int(Decimal('2'), Decimal('3'))
        Decimal('0')
        >>> ExtendedContext.divide_int(Decimal('10'), Decimal('3'))
        Decimal('3')
        >>> ExtendedContext.divide_int(Decimal('1'), Decimal('0.3'))
        Decimal('3')
        >>> ExtendedContext.divide_int(10, 3)
        Decimal('3')
        >>> ExtendedContext.divide_int(Decimal(10), 3)
        Decimal('3')
        >>> ExtendedContext.divide_int(10, Decimal(3))
        Decimal('3')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__floordiv__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def divmod(self, a, b):
        """Return (a // b, a % b).

        >>> ExtendedContext.divmod(Decimal(8), Decimal(3))
        (Decimal('2'), Decimal('2'))
        >>> ExtendedContext.divmod(Decimal(8), Decimal(4))
        (Decimal('2'), Decimal('0'))
        >>> ExtendedContext.divmod(8, 4)
        (Decimal('2'), Decimal('0'))
        >>> ExtendedContext.divmod(Decimal(8), 4)
        (Decimal('2'), Decimal('0'))
        >>> ExtendedContext.divmod(8, Decimal(4))
        (Decimal('2'), Decimal('0'))
        """
        a = _convert_other(a, raiseit=True)
        r = a.__divmod__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def exp(self, a):
        """Returns e ** a.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.exp(Decimal('-Infinity'))
        Decimal('0')
        >>> c.exp(Decimal('-1'))
        Decimal('0.367879441')
        >>> c.exp(Decimal('0'))
        Decimal('1')
        >>> c.exp(Decimal('1'))
        Decimal('2.71828183')
        >>> c.exp(Decimal('0.693147181'))
        Decimal('2.00000000')
        >>> c.exp(Decimal('+Infinity'))
        Decimal('Infinity')
        >>> c.exp(10)
        Decimal('22026.4658')
        """
        a = _convert_other(a, raiseit=True)
        return a.exp(context=self)

    def fma(self, a, b, c):
        """Returns a multiplied by b, plus c.

        The first two operands are multiplied together, using multiply,
        the third operand is then added to the result of that
        multiplication, using add, all with only one final rounding.

        >>> ExtendedContext.fma(Decimal('3'), Decimal('5'), Decimal('7'))
        Decimal('22')
        >>> ExtendedContext.fma(Decimal('3'), Decimal('-5'), Decimal('7'))
        Decimal('-8')
        >>> ExtendedContext.fma(Decimal('888565290'), Decimal('1557.96930'), Decimal('-86087.7578'))
        Decimal('1.38435736E+12')
        >>> ExtendedContext.fma(1, 3, 4)
        Decimal('7')
        >>> ExtendedContext.fma(1, Decimal(3), 4)
        Decimal('7')
        >>> ExtendedContext.fma(1, 3, Decimal(4))
        Decimal('7')
        """
        a = _convert_other(a, raiseit=True)
        return a.fma(b, c, context=self)

    def is_canonical(self, a):
        """Return True if the operand is canonical; otherwise return False.

        Currently, the encoding of a Decimal instance is always
        canonical, so this method returns True for any Decimal.

        >>> ExtendedContext.is_canonical(Decimal('2.50'))
        True
        """
        return a.is_canonical()

    def is_finite(self, a):
        """Return True if the operand is finite; otherwise return False.

        A Decimal instance is considered finite if it is neither
        infinite nor a NaN.

        >>> ExtendedContext.is_finite(Decimal('2.50'))
        True
        >>> ExtendedContext.is_finite(Decimal('-0.3'))
        True
        >>> ExtendedContext.is_finite(Decimal('0'))
        True
        >>> ExtendedContext.is_finite(Decimal('Inf'))
        False
        >>> ExtendedContext.is_finite(Decimal('NaN'))
        False
        >>> ExtendedContext.is_finite(1)
        True
        """
        a = _convert_other(a, raiseit=True)
        return a.is_finite()

    def is_infinite(self, a):
        """Return True if the operand is infinite; otherwise return False.

        >>> ExtendedContext.is_infinite(Decimal('2.50'))
        False
        >>> ExtendedContext.is_infinite(Decimal('-Inf'))
        True
        >>> ExtendedContext.is_infinite(Decimal('NaN'))
        False
        >>> ExtendedContext.is_infinite(1)
        False
        """
        a = _convert_other(a, raiseit=True)
        return a.is_infinite()

    def is_nan(self, a):
        """Return True if the operand is a qNaN or sNaN;
        otherwise return False.

        >>> ExtendedContext.is_nan(Decimal('2.50'))
        False
        >>> ExtendedContext.is_nan(Decimal('NaN'))
        True
        >>> ExtendedContext.is_nan(Decimal('-sNaN'))
        True
        >>> ExtendedContext.is_nan(1)
        False
        """
        a = _convert_other(a, raiseit=True)
        return a.is_nan()

    def is_normal(self, a):
        """Return True if the operand is a normal number;
        otherwise return False.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.is_normal(Decimal('2.50'))
        True
        >>> c.is_normal(Decimal('0.1E-999'))
        False
        >>> c.is_normal(Decimal('0.00'))
        False
        >>> c.is_normal(Decimal('-Inf'))
        False
        >>> c.is_normal(Decimal('NaN'))
        False
        >>> c.is_normal(1)
        True
        """
        a = _convert_other(a, raiseit=True)
        return a.is_normal(context=self)

    def is_qnan(self, a):
        """Return True if the operand is a quiet NaN; otherwise return False.

        >>> ExtendedContext.is_qnan(Decimal('2.50'))
        False
        >>> ExtendedContext.is_qnan(Decimal('NaN'))
        True
        >>> ExtendedContext.is_qnan(Decimal('sNaN'))
        False
        >>> ExtendedContext.is_qnan(1)
        False
        """
        a = _convert_other(a, raiseit=True)
        return a.is_qnan()

    def is_signed(self, a):
        """Return True if the operand is negative; otherwise return False.

        >>> ExtendedContext.is_signed(Decimal('2.50'))
        False
        >>> ExtendedContext.is_signed(Decimal('-12'))
        True
        >>> ExtendedContext.is_signed(Decimal('-0'))
        True
        >>> ExtendedContext.is_signed(8)
        False
        >>> ExtendedContext.is_signed(-8)
        True
        """
        a = _convert_other(a, raiseit=True)
        return a.is_signed()

    def is_snan(self, a):
        """Return True if the operand is a signaling NaN;
        otherwise return False.

        >>> ExtendedContext.is_snan(Decimal('2.50'))
        False
        >>> ExtendedContext.is_snan(Decimal('NaN'))
        False
        >>> ExtendedContext.is_snan(Decimal('sNaN'))
        True
        >>> ExtendedContext.is_snan(1)
        False
        """
        a = _convert_other(a, raiseit=True)
        return a.is_snan()

    def is_subnormal(self, a):
        """Return True if the operand is subnormal; otherwise return False.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.is_subnormal(Decimal('2.50'))
        False
        >>> c.is_subnormal(Decimal('0.1E-999'))
        True
        >>> c.is_subnormal(Decimal('0.00'))
        False
        >>> c.is_subnormal(Decimal('-Inf'))
        False
        >>> c.is_subnormal(Decimal('NaN'))
        False
        >>> c.is_subnormal(1)
        False
        """
        a = _convert_other(a, raiseit=True)
        return a.is_subnormal(context=self)

    def is_zero(self, a):
        """Return True if the operand is a zero; otherwise return False.

        >>> ExtendedContext.is_zero(Decimal('0'))
        True
        >>> ExtendedContext.is_zero(Decimal('2.50'))
        False
        >>> ExtendedContext.is_zero(Decimal('-0E+2'))
        True
        >>> ExtendedContext.is_zero(1)
        False
        >>> ExtendedContext.is_zero(0)
        True
        """
        a = _convert_other(a, raiseit=True)
        return a.is_zero()

    def ln(self, a):
        """Returns the natural (base e) logarithm of the operand.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.ln(Decimal('0'))
        Decimal('-Infinity')
        >>> c.ln(Decimal('1.000'))
        Decimal('0')
        >>> c.ln(Decimal('2.71828183'))
        Decimal('1.00000000')
        >>> c.ln(Decimal('10'))
        Decimal('2.30258509')
        >>> c.ln(Decimal('+Infinity'))
        Decimal('Infinity')
        >>> c.ln(1)
        Decimal('0')
        """
        a = _convert_other(a, raiseit=True)
        return a.ln(context=self)

    def log10(self, a):
        """Returns the base 10 logarithm of the operand.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.log10(Decimal('0'))
        Decimal('-Infinity')
        >>> c.log10(Decimal('0.001'))
        Decimal('-3')
        >>> c.log10(Decimal('1.000'))
        Decimal('0')
        >>> c.log10(Decimal('2'))
        Decimal('0.301029996')
        >>> c.log10(Decimal('10'))
        Decimal('1')
        >>> c.log10(Decimal('70'))
        Decimal('1.84509804')
        >>> c.log10(Decimal('+Infinity'))
        Decimal('Infinity')
        >>> c.log10(0)
        Decimal('-Infinity')
        >>> c.log10(1)
        Decimal('0')
        """
        a = _convert_other(a, raiseit=True)
        return a.log10(context=self)

    def logb(self, a):
        """ Returns the exponent of the magnitude of the operand's MSD.

        The result is the integer which is the exponent of the magnitude
        of the most significant digit of the operand (as though the
        operand were truncated to a single digit while maintaining the
        value of that digit and without limiting the resulting exponent).

        >>> ExtendedContext.logb(Decimal('250'))
        Decimal('2')
        >>> ExtendedContext.logb(Decimal('2.50'))
        Decimal('0')
        >>> ExtendedContext.logb(Decimal('0.03'))
        Decimal('-2')
        >>> ExtendedContext.logb(Decimal('0'))
        Decimal('-Infinity')
        >>> ExtendedContext.logb(1)
        Decimal('0')
        >>> ExtendedContext.logb(10)
        Decimal('1')
        >>> ExtendedContext.logb(100)
        Decimal('2')
        """
        a = _convert_other(a, raiseit=True)
        return a.logb(context=self)

    def logical_and(self, a, b):
        """Applies the logical operation 'and' between each operand's digits.

        The operands must be both logical numbers.

        >>> ExtendedContext.logical_and(Decimal('0'), Decimal('0'))
        Decimal('0')
        >>> ExtendedContext.logical_and(Decimal('0'), Decimal('1'))
        Decimal('0')
        >>> ExtendedContext.logical_and(Decimal('1'), Decimal('0'))
        Decimal('0')
        >>> ExtendedContext.logical_and(Decimal('1'), Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.logical_and(Decimal('1100'), Decimal('1010'))
        Decimal('1000')
        >>> ExtendedContext.logical_and(Decimal('1111'), Decimal('10'))
        Decimal('10')
        >>> ExtendedContext.logical_and(110, 1101)
        Decimal('100')
        >>> ExtendedContext.logical_and(Decimal(110), 1101)
        Decimal('100')
        >>> ExtendedContext.logical_and(110, Decimal(1101))
        Decimal('100')
        """
        a = _convert_other(a, raiseit=True)
        return a.logical_and(b, context=self)

    def logical_invert(self, a):
        """Invert all the digits in the operand.

        The operand must be a logical number.

        >>> ExtendedContext.logical_invert(Decimal('0'))
        Decimal('111111111')
        >>> ExtendedContext.logical_invert(Decimal('1'))
        Decimal('111111110')
        >>> ExtendedContext.logical_invert(Decimal('111111111'))
        Decimal('0')
        >>> ExtendedContext.logical_invert(Decimal('101010101'))
        Decimal('10101010')
        >>> ExtendedContext.logical_invert(1101)
        Decimal('111110010')
        """
        a = _convert_other(a, raiseit=True)
        return a.logical_invert(context=self)

    def logical_or(self, a, b):
        """Applies the logical operation 'or' between each operand's digits.

        The operands must be both logical numbers.

        >>> ExtendedContext.logical_or(Decimal('0'), Decimal('0'))
        Decimal('0')
        >>> ExtendedContext.logical_or(Decimal('0'), Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.logical_or(Decimal('1'), Decimal('0'))
        Decimal('1')
        >>> ExtendedContext.logical_or(Decimal('1'), Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.logical_or(Decimal('1100'), Decimal('1010'))
        Decimal('1110')
        >>> ExtendedContext.logical_or(Decimal('1110'), Decimal('10'))
        Decimal('1110')
        >>> ExtendedContext.logical_or(110, 1101)
        Decimal('1111')
        >>> ExtendedContext.logical_or(Decimal(110), 1101)
        Decimal('1111')
        >>> ExtendedContext.logical_or(110, Decimal(1101))
        Decimal('1111')
        """
        a = _convert_other(a, raiseit=True)
        return a.logical_or(b, context=self)

    def logical_xor(self, a, b):
        """Applies the logical operation 'xor' between each operand's digits.

        The operands must be both logical numbers.

        >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('0'))
        Decimal('0')
        >>> ExtendedContext.logical_xor(Decimal('0'), Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('0'))
        Decimal('1')
        >>> ExtendedContext.logical_xor(Decimal('1'), Decimal('1'))
        Decimal('0')
        >>> ExtendedContext.logical_xor(Decimal('1100'), Decimal('1010'))
        Decimal('110')
        >>> ExtendedContext.logical_xor(Decimal('1111'), Decimal('10'))
        Decimal('1101')
        >>> ExtendedContext.logical_xor(110, 1101)
        Decimal('1011')
        >>> ExtendedContext.logical_xor(Decimal(110), 1101)
        Decimal('1011')
        >>> ExtendedContext.logical_xor(110, Decimal(1101))
        Decimal('1011')
        """
        a = _convert_other(a, raiseit=True)
        return a.logical_xor(b, context=self)

    def max(self, a, b):
        """max compares two values numerically and returns the maximum.

        If either operand is a NaN then the general rules apply.
        Otherwise, the operands are compared as though by the compare
        operation.  If they are numerically equal then the left-hand operand
        is chosen as the result.  Otherwise the maximum (closer to positive
        infinity) of the two operands is chosen as the result.

        >>> ExtendedContext.max(Decimal('3'), Decimal('2'))
        Decimal('3')
        >>> ExtendedContext.max(Decimal('-10'), Decimal('3'))
        Decimal('3')
        >>> ExtendedContext.max(Decimal('1.0'), Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.max(Decimal('7'), Decimal('NaN'))
        Decimal('7')
        >>> ExtendedContext.max(1, 2)
        Decimal('2')
        >>> ExtendedContext.max(Decimal(1), 2)
        Decimal('2')
        >>> ExtendedContext.max(1, Decimal(2))
        Decimal('2')
        """
        a = _convert_other(a, raiseit=True)
        return a.max(b, context=self)

    def max_mag(self, a, b):
        """Compares the values numerically with their sign ignored.

        >>> ExtendedContext.max_mag(Decimal('7'), Decimal('NaN'))
        Decimal('7')
        >>> ExtendedContext.max_mag(Decimal('7'), Decimal('-10'))
        Decimal('-10')
        >>> ExtendedContext.max_mag(1, -2)
        Decimal('-2')
        >>> ExtendedContext.max_mag(Decimal(1), -2)
        Decimal('-2')
        >>> ExtendedContext.max_mag(1, Decimal(-2))
        Decimal('-2')
        """
        a = _convert_other(a, raiseit=True)
        return a.max_mag(b, context=self)

    def min(self, a, b):
        """min compares two values numerically and returns the minimum.

        If either operand is a NaN then the general rules apply.
        Otherwise, the operands are compared as though by the compare
        operation.  If they are numerically equal then the left-hand operand
        is chosen as the result.  Otherwise the minimum (closer to negative
        infinity) of the two operands is chosen as the result.

        >>> ExtendedContext.min(Decimal('3'), Decimal('2'))
        Decimal('2')
        >>> ExtendedContext.min(Decimal('-10'), Decimal('3'))
        Decimal('-10')
        >>> ExtendedContext.min(Decimal('1.0'), Decimal('1'))
        Decimal('1.0')
        >>> ExtendedContext.min(Decimal('7'), Decimal('NaN'))
        Decimal('7')
        >>> ExtendedContext.min(1, 2)
        Decimal('1')
        >>> ExtendedContext.min(Decimal(1), 2)
        Decimal('1')
        >>> ExtendedContext.min(1, Decimal(29))
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return a.min(b, context=self)

    def min_mag(self, a, b):
        """Compares the values numerically with their sign ignored.

        >>> ExtendedContext.min_mag(Decimal('3'), Decimal('-2'))
        Decimal('-2')
        >>> ExtendedContext.min_mag(Decimal('-3'), Decimal('NaN'))
        Decimal('-3')
        >>> ExtendedContext.min_mag(1, -2)
        Decimal('1')
        >>> ExtendedContext.min_mag(Decimal(1), -2)
        Decimal('1')
        >>> ExtendedContext.min_mag(1, Decimal(-2))
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return a.min_mag(b, context=self)

    def minus(self, a):
        """Minus corresponds to unary prefix minus in Python.

        The operation is evaluated using the same rules as subtract; the
        operation minus(a) is calculated as subtract('0', a) where the '0'
        has the same exponent as the operand.

        >>> ExtendedContext.minus(Decimal('1.3'))
        Decimal('-1.3')
        >>> ExtendedContext.minus(Decimal('-1.3'))
        Decimal('1.3')
        >>> ExtendedContext.minus(1)
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.__neg__(context=self)

    def multiply(self, a, b):
        """multiply multiplies two operands.

        If either operand is a special value then the general rules apply.
        Otherwise, the operands are multiplied together
        ('long multiplication'), resulting in a number which may be as long as
        the sum of the lengths of the two operands.

        >>> ExtendedContext.multiply(Decimal('1.20'), Decimal('3'))
        Decimal('3.60')
        >>> ExtendedContext.multiply(Decimal('7'), Decimal('3'))
        Decimal('21')
        >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('0.8'))
        Decimal('0.72')
        >>> ExtendedContext.multiply(Decimal('0.9'), Decimal('-0'))
        Decimal('-0.0')
        >>> ExtendedContext.multiply(Decimal('654321'), Decimal('654321'))
        Decimal('4.28135971E+11')
        >>> ExtendedContext.multiply(7, 7)
        Decimal('49')
        >>> ExtendedContext.multiply(Decimal(7), 7)
        Decimal('49')
        >>> ExtendedContext.multiply(7, Decimal(7))
        Decimal('49')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__mul__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def next_minus(self, a):
        """Returns the largest representable number smaller than a.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> ExtendedContext.next_minus(Decimal('1'))
        Decimal('0.999999999')
        >>> c.next_minus(Decimal('1E-1007'))
        Decimal('0E-1007')
        >>> ExtendedContext.next_minus(Decimal('-1.00000003'))
        Decimal('-1.00000004')
        >>> c.next_minus(Decimal('Infinity'))
        Decimal('9.99999999E+999')
        >>> c.next_minus(1)
        Decimal('0.999999999')
        """
        a = _convert_other(a, raiseit=True)
        return a.next_minus(context=self)

    def next_plus(self, a):
        """Returns the smallest representable number larger than a.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> ExtendedContext.next_plus(Decimal('1'))
        Decimal('1.00000001')
        >>> c.next_plus(Decimal('-1E-1007'))
        Decimal('-0E-1007')
        >>> ExtendedContext.next_plus(Decimal('-1.00000003'))
        Decimal('-1.00000002')
        >>> c.next_plus(Decimal('-Infinity'))
        Decimal('-9.99999999E+999')
        >>> c.next_plus(1)
        Decimal('1.00000001')
        """
        a = _convert_other(a, raiseit=True)
        return a.next_plus(context=self)

    def next_toward(self, a, b):
        """Returns the number closest to a, in direction towards b.

        The result is the closest representable number from the first
        operand (but not the first operand) that is in the direction
        towards the second operand, unless the operands have the same
        value.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.next_toward(Decimal('1'), Decimal('2'))
        Decimal('1.00000001')
        >>> c.next_toward(Decimal('-1E-1007'), Decimal('1'))
        Decimal('-0E-1007')
        >>> c.next_toward(Decimal('-1.00000003'), Decimal('0'))
        Decimal('-1.00000002')
        >>> c.next_toward(Decimal('1'), Decimal('0'))
        Decimal('0.999999999')
        >>> c.next_toward(Decimal('1E-1007'), Decimal('-100'))
        Decimal('0E-1007')
        >>> c.next_toward(Decimal('-1.00000003'), Decimal('-10'))
        Decimal('-1.00000004')
        >>> c.next_toward(Decimal('0.00'), Decimal('-0.0000'))
        Decimal('-0.00')
        >>> c.next_toward(0, 1)
        Decimal('1E-1007')
        >>> c.next_toward(Decimal(0), 1)
        Decimal('1E-1007')
        >>> c.next_toward(0, Decimal(1))
        Decimal('1E-1007')
        """
        a = _convert_other(a, raiseit=True)
        return a.next_toward(b, context=self)

    def normalize(self, a):
        """normalize reduces an operand to its simplest form.

        Essentially a plus operation with all trailing zeros removed from the
        result.

        >>> ExtendedContext.normalize(Decimal('2.1'))
        Decimal('2.1')
        >>> ExtendedContext.normalize(Decimal('-2.0'))
        Decimal('-2')
        >>> ExtendedContext.normalize(Decimal('1.200'))
        Decimal('1.2')
        >>> ExtendedContext.normalize(Decimal('-120'))
        Decimal('-1.2E+2')
        >>> ExtendedContext.normalize(Decimal('120.00'))
        Decimal('1.2E+2')
        >>> ExtendedContext.normalize(Decimal('0.00'))
        Decimal('0')
        >>> ExtendedContext.normalize(6)
        Decimal('6')
        """
        a = _convert_other(a, raiseit=True)
        return a.normalize(context=self)

    def number_class(self, a):
        """Returns an indication of the class of the operand.

        The class is one of the following strings:
          -sNaN
          -NaN
          -Infinity
          -Normal
          -Subnormal
          -Zero
          +Zero
          +Subnormal
          +Normal
          +Infinity

        >>> c = Context(ExtendedContext)
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.number_class(Decimal('Infinity'))
        '+Infinity'
        >>> c.number_class(Decimal('1E-10'))
        '+Normal'
        >>> c.number_class(Decimal('2.50'))
        '+Normal'
        >>> c.number_class(Decimal('0.1E-999'))
        '+Subnormal'
        >>> c.number_class(Decimal('0'))
        '+Zero'
        >>> c.number_class(Decimal('-0'))
        '-Zero'
        >>> c.number_class(Decimal('-0.1E-999'))
        '-Subnormal'
        >>> c.number_class(Decimal('-1E-10'))
        '-Normal'
        >>> c.number_class(Decimal('-2.50'))
        '-Normal'
        >>> c.number_class(Decimal('-Infinity'))
        '-Infinity'
        >>> c.number_class(Decimal('NaN'))
        'NaN'
        >>> c.number_class(Decimal('-NaN'))
        'NaN'
        >>> c.number_class(Decimal('sNaN'))
        'sNaN'
        >>> c.number_class(123)
        '+Normal'
        """
        a = _convert_other(a, raiseit=True)
        return a.number_class(context=self)

    def plus(self, a):
        """Plus corresponds to unary prefix plus in Python.

        The operation is evaluated using the same rules as add; the
        operation plus(a) is calculated as add('0', a) where the '0'
        has the same exponent as the operand.

        >>> ExtendedContext.plus(Decimal('1.3'))
        Decimal('1.3')
        >>> ExtendedContext.plus(Decimal('-1.3'))
        Decimal('-1.3')
        >>> ExtendedContext.plus(-1)
        Decimal('-1')
        """
        a = _convert_other(a, raiseit=True)
        return a.__pos__(context=self)

    def power(self, a, b, modulo=None):
        """Raises a to the power of b, to modulo if given.

        With two arguments, compute a**b.  If a is negative then b
        must be integral.  The result will be inexact unless b is
        integral and the result is finite and can be expressed exactly
        in 'precision' digits.

        With three arguments, compute (a**b) % modulo.  For the
        three argument form, the following restrictions on the
        arguments hold:

         - all three arguments must be integral
         - b must be nonnegative
         - at least one of a or b must be nonzero
         - modulo must be nonzero and have at most 'precision' digits

        The result of pow(a, b, modulo) is identical to the result
        that would be obtained by computing (a**b) % modulo with
        unbounded precision, but is computed more efficiently.  It is
        always exact.

        >>> c = ExtendedContext.copy()
        >>> c.Emin = -999
        >>> c.Emax = 999
        >>> c.power(Decimal('2'), Decimal('3'))
        Decimal('8')
        >>> c.power(Decimal('-2'), Decimal('3'))
        Decimal('-8')
        >>> c.power(Decimal('2'), Decimal('-3'))
        Decimal('0.125')
        >>> c.power(Decimal('1.7'), Decimal('8'))
        Decimal('69.7575744')
        >>> c.power(Decimal('10'), Decimal('0.301029996'))
        Decimal('2.00000000')
        >>> c.power(Decimal('Infinity'), Decimal('-1'))
        Decimal('0')
        >>> c.power(Decimal('Infinity'), Decimal('0'))
        Decimal('1')
        >>> c.power(Decimal('Infinity'), Decimal('1'))
        Decimal('Infinity')
        >>> c.power(Decimal('-Infinity'), Decimal('-1'))
        Decimal('-0')
        >>> c.power(Decimal('-Infinity'), Decimal('0'))
        Decimal('1')
        >>> c.power(Decimal('-Infinity'), Decimal('1'))
        Decimal('-Infinity')
        >>> c.power(Decimal('-Infinity'), Decimal('2'))
        Decimal('Infinity')
        >>> c.power(Decimal('0'), Decimal('0'))
        Decimal('NaN')

        >>> c.power(Decimal('3'), Decimal('7'), Decimal('16'))
        Decimal('11')
        >>> c.power(Decimal('-3'), Decimal('7'), Decimal('16'))
        Decimal('-11')
        >>> c.power(Decimal('-3'), Decimal('8'), Decimal('16'))
        Decimal('1')
        >>> c.power(Decimal('3'), Decimal('7'), Decimal('-16'))
        Decimal('11')
        >>> c.power(Decimal('23E12345'), Decimal('67E189'), Decimal('123456789'))
        Decimal('11729830')
        >>> c.power(Decimal('-0'), Decimal('17'), Decimal('1729'))
        Decimal('-0')
        >>> c.power(Decimal('-23'), Decimal('0'), Decimal('65537'))
        Decimal('1')
        >>> ExtendedContext.power(7, 7)
        Decimal('823543')
        >>> ExtendedContext.power(Decimal(7), 7)
        Decimal('823543')
        >>> ExtendedContext.power(7, Decimal(7), 2)
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__pow__(b, modulo, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def quantize(self, a, b):
        """Returns a value equal to 'a' (rounded), having the exponent of 'b'.

        The coefficient of the result is derived from that of the left-hand
        operand.  It may be rounded using the current rounding setting (if the
        exponent is being increased), multiplied by a positive power of ten (if
        the exponent is being decreased), or is unchanged (if the exponent is
        already equal to that of the right-hand operand).

        Unlike other operations, if the length of the coefficient after the
        quantize operation would be greater than precision then an Invalid
        operation condition is raised.  This guarantees that, unless there is
        an error condition, the exponent of the result of a quantize is always
        equal to that of the right-hand operand.

        Also unlike other operations, quantize will never raise Underflow, even
        if the result is subnormal and inexact.

        >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.001'))
        Decimal('2.170')
        >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.01'))
        Decimal('2.17')
        >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('0.1'))
        Decimal('2.2')
        >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+0'))
        Decimal('2')
        >>> ExtendedContext.quantize(Decimal('2.17'), Decimal('1e+1'))
        Decimal('0E+1')
        >>> ExtendedContext.quantize(Decimal('-Inf'), Decimal('Infinity'))
        Decimal('-Infinity')
        >>> ExtendedContext.quantize(Decimal('2'), Decimal('Infinity'))
        Decimal('NaN')
        >>> ExtendedContext.quantize(Decimal('-0.1'), Decimal('1'))
        Decimal('-0')
        >>> ExtendedContext.quantize(Decimal('-0'), Decimal('1e+5'))
        Decimal('-0E+5')
        >>> ExtendedContext.quantize(Decimal('+35236450.6'), Decimal('1e-2'))
        Decimal('NaN')
        >>> ExtendedContext.quantize(Decimal('-35236450.6'), Decimal('1e-2'))
        Decimal('NaN')
        >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-1'))
        Decimal('217.0')
        >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e-0'))
        Decimal('217')
        >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+1'))
        Decimal('2.2E+2')
        >>> ExtendedContext.quantize(Decimal('217'), Decimal('1e+2'))
        Decimal('2E+2')
        >>> ExtendedContext.quantize(1, 2)
        Decimal('1')
        >>> ExtendedContext.quantize(Decimal(1), 2)
        Decimal('1')
        >>> ExtendedContext.quantize(1, Decimal(2))
        Decimal('1')
        """
        a = _convert_other(a, raiseit=True)
        return a.quantize(b, context=self)

    def radix(self):
        """Just returns 10, as this is Decimal, :)

        >>> ExtendedContext.radix()
        Decimal('10')
        """
        return Decimal(10)

    def remainder(self, a, b):
        """Returns the remainder from integer division.

        The result is the residue of the dividend after the operation of
        calculating integer division as described for divide-integer, rounded
        to precision digits if necessary.  The sign of the result, if
        non-zero, is the same as that of the original dividend.

        This operation will fail under the same conditions as integer division
        (that is, if integer division on the same two operands would fail, the
        remainder cannot be calculated).

        >>> ExtendedContext.remainder(Decimal('2.1'), Decimal('3'))
        Decimal('2.1')
        >>> ExtendedContext.remainder(Decimal('10'), Decimal('3'))
        Decimal('1')
        >>> ExtendedContext.remainder(Decimal('-10'), Decimal('3'))
        Decimal('-1')
        >>> ExtendedContext.remainder(Decimal('10.2'), Decimal('1'))
        Decimal('0.2')
        >>> ExtendedContext.remainder(Decimal('10'), Decimal('0.3'))
        Decimal('0.1')
        >>> ExtendedContext.remainder(Decimal('3.6'), Decimal('1.3'))
        Decimal('1.0')
        >>> ExtendedContext.remainder(22, 6)
        Decimal('4')
        >>> ExtendedContext.remainder(Decimal(22), 6)
        Decimal('4')
        >>> ExtendedContext.remainder(22, Decimal(6))
        Decimal('4')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__mod__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def remainder_near(self, a, b):
        """Returns to be "a - b * n", where n is the integer nearest the exact
        value of "x / b" (if two integers are equally near then the even one
        is chosen).  If the result is equal to 0 then its sign will be the
        sign of a.

        This operation will fail under the same conditions as integer division
        (that is, if integer division on the same two operands would fail, the
        remainder cannot be calculated).

        >>> ExtendedContext.remainder_near(Decimal('2.1'), Decimal('3'))
        Decimal('-0.9')
        >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('6'))
        Decimal('-2')
        >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('3'))
        Decimal('1')
        >>> ExtendedContext.remainder_near(Decimal('-10'), Decimal('3'))
        Decimal('-1')
        >>> ExtendedContext.remainder_near(Decimal('10.2'), Decimal('1'))
        Decimal('0.2')
        >>> ExtendedContext.remainder_near(Decimal('10'), Decimal('0.3'))
        Decimal('0.1')
        >>> ExtendedContext.remainder_near(Decimal('3.6'), Decimal('1.3'))
        Decimal('-0.3')
        >>> ExtendedContext.remainder_near(3, 11)
        Decimal('3')
        >>> ExtendedContext.remainder_near(Decimal(3), 11)
        Decimal('3')
        >>> ExtendedContext.remainder_near(3, Decimal(11))
        Decimal('3')
        """
        a = _convert_other(a, raiseit=True)
        return a.remainder_near(b, context=self)

    def rotate(self, a, b):
        """Returns a rotated copy of a, b times.

        The coefficient of the result is a rotated copy of the digits in
        the coefficient of the first operand.  The number of places of
        rotation is taken from the absolute value of the second operand,
        with the rotation being to the left if the second operand is
        positive or to the right otherwise.

        >>> ExtendedContext.rotate(Decimal('34'), Decimal('8'))
        Decimal('400000003')
        >>> ExtendedContext.rotate(Decimal('12'), Decimal('9'))
        Decimal('12')
        >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('-2'))
        Decimal('891234567')
        >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('0'))
        Decimal('123456789')
        >>> ExtendedContext.rotate(Decimal('123456789'), Decimal('+2'))
        Decimal('345678912')
        >>> ExtendedContext.rotate(1333333, 1)
        Decimal('13333330')
        >>> ExtendedContext.rotate(Decimal(1333333), 1)
        Decimal('13333330')
        >>> ExtendedContext.rotate(1333333, Decimal(1))
        Decimal('13333330')
        """
        a = _convert_other(a, raiseit=True)
        return a.rotate(b, context=self)

    def same_quantum(self, a, b):
        """Returns True if the two operands have the same exponent.

        The result is never affected by either the sign or the coefficient of
        either operand.

        >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.001'))
        False
        >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('0.01'))
        True
        >>> ExtendedContext.same_quantum(Decimal('2.17'), Decimal('1'))
        False
        >>> ExtendedContext.same_quantum(Decimal('Inf'), Decimal('-Inf'))
        True
        >>> ExtendedContext.same_quantum(10000, -1)
        True
        >>> ExtendedContext.same_quantum(Decimal(10000), -1)
        True
        >>> ExtendedContext.same_quantum(10000, Decimal(-1))
        True
        R�(R�R(R.(RRRb((s/usr/lib64/python2.7/decimal.pyR.\scCs%t|dt�}|j|d|�S(s3Returns the first operand after adding the second value its exp.

        >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('-2'))
        Decimal('0.0750')
        >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('0'))
        Decimal('7.50')
        >>> ExtendedContext.scaleb(Decimal('7.50'), Decimal('3'))
        Decimal('7.50E+3')
        >>> ExtendedContext.scaleb(1, 4)
        Decimal('1E+4')
        >>> ExtendedContext.scaleb(Decimal(1), 4)
        Decimal('1E+4')
        >>> ExtendedContext.scaleb(1, Decimal(4))
        Decimal('1E+4')
        """
        a = _convert_other(a, raiseit=True)
        return a.scaleb(b, context=self)

    def shift(self, a, b):
        """Returns a shifted copy of a, b times.

        The coefficient of the result is a shifted copy of the digits
        in the coefficient of the first operand.  The number of places
        to shift is taken from the absolute value of the second operand,
        with the shift being to the left if the second operand is
        positive or to the right otherwise.  Digits shifted into the
        coefficient are zeros.

        >>> ExtendedContext.shift(Decimal('34'), Decimal('8'))
        Decimal('400000000')
        >>> ExtendedContext.shift(Decimal('12'), Decimal('9'))
        Decimal('0')
        >>> ExtendedContext.shift(Decimal('123456789'), Decimal('-2'))
        Decimal('1234567')
        >>> ExtendedContext.shift(Decimal('123456789'), Decimal('0'))
        Decimal('123456789')
        >>> ExtendedContext.shift(Decimal('123456789'), Decimal('+2'))
        Decimal('345678900')
        >>> ExtendedContext.shift(88888888, 2)
        Decimal('888888800')
        >>> ExtendedContext.shift(Decimal(88888888), 2)
        Decimal('888888800')
        >>> ExtendedContext.shift(88888888, Decimal(2))
        Decimal('888888800')
        """
        a = _convert_other(a, raiseit=True)
        return a.shift(b, context=self)

    def sqrt(self, a):
        """Square root of a non-negative number to context precision.

        If the result must be inexact, it is rounded using the round-half-even
        algorithm.

        >>> ExtendedContext.sqrt(Decimal('0'))
        Decimal('0')
        >>> ExtendedContext.sqrt(Decimal('-0'))
        Decimal('-0')
        >>> ExtendedContext.sqrt(Decimal('0.39'))
        Decimal('0.624499800')
        >>> ExtendedContext.sqrt(Decimal('100'))
        Decimal('10')
        >>> ExtendedContext.sqrt(Decimal('1'))
        Decimal('1')
        >>> ExtendedContext.sqrt(Decimal('1.0'))
        Decimal('1.0')
        >>> ExtendedContext.sqrt(Decimal('1.00'))
        Decimal('1.0')
        >>> ExtendedContext.sqrt(Decimal('7'))
        Decimal('2.64575131')
        >>> ExtendedContext.sqrt(Decimal('10'))
        Decimal('3.16227766')
        >>> ExtendedContext.sqrt(2)
        Decimal('1.41421356')
        >>> ExtendedContext.prec
        9
        """
        a = _convert_other(a, raiseit=True)
        return a.sqrt(context=self)

    def subtract(self, a, b):
        """Return the difference between the two operands.

        >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.07'))
        Decimal('0.23')
        >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('1.30'))
        Decimal('0.00')
        >>> ExtendedContext.subtract(Decimal('1.3'), Decimal('2.07'))
        Decimal('-0.77')
        >>> ExtendedContext.subtract(8, 5)
        Decimal('3')
        >>> ExtendedContext.subtract(Decimal(8), 5)
        Decimal('3')
        >>> ExtendedContext.subtract(8, Decimal(5))
        Decimal('3')
        """
        a = _convert_other(a, raiseit=True)
        r = a.__sub__(b, context=self)
        if r is NotImplemented:
            raise TypeError("Unable to convert %s to Decimal" % b)
        else:
            return r

    def to_eng_string(self, a):
        """Convert to a string, using engineering notation if an exponent is needed.

        Engineering notation has an exponent which is a multiple of 3.  This
        can leave up to 3 digits to the left of the decimal place and may
        require the addition of either one or two trailing zeros.

        The operation is not affected by the context.

        >>> ExtendedContext.to_eng_string(Decimal('123E+1'))
        '1.23E+3'
        >>> ExtendedContext.to_eng_string(Decimal('123E+3'))
        '123E+3'
        >>> ExtendedContext.to_eng_string(Decimal('123E-10'))
        '12.3E-9'
        >>> ExtendedContext.to_eng_string(Decimal('-123E-12'))
        '-123E-12'
        >>> ExtendedContext.to_eng_string(Decimal('7E-7'))
        '700E-9'
        >>> ExtendedContext.to_eng_string(Decimal('7E+1'))
        '70'
        >>> ExtendedContext.to_eng_string(Decimal('0E+1'))
        '0.00E+3'

        """
        a = _convert_other(a, raiseit=True)
        return a.to_eng_string(context=self)

    def to_sci_string(self, a):
        """Converts a number to a string, using scientific notation.

        The operation is not affected by the context.
        """
        a = _convert_other(a, raiseit=True)
        return a.__str__(context=self)

    def to_integral_exact(self, a):
        """Rounds to an integer.

        When the operand has a negative exponent, the result is the same
        as using the quantize() operation using the given operand as the
        left-hand-operand, 1E+0 as the right-hand-operand, and the precision
        of the operand as the precision setting; Inexact and Rounded flags
        are allowed in this operation.  The rounding mode is taken from the
        context.

        >>> ExtendedContext.to_integral_exact(Decimal('2.1'))
        Decimal('2')
        >>> ExtendedContext.to_integral_exact(Decimal('100'))
        Decimal('100')
        >>> ExtendedContext.to_integral_exact(Decimal('100.0'))
        Decimal('100')
        >>> ExtendedContext.to_integral_exact(Decimal('101.5'))
        Decimal('102')
        >>> ExtendedContext.to_integral_exact(Decimal('-101.5'))
        Decimal('-102')
        >>> ExtendedContext.to_integral_exact(Decimal('10E+5'))
        Decimal('1.0E+6')
        >>> ExtendedContext.to_integral_exact(Decimal('7.89E+77'))
        Decimal('7.89E+77')
        >>> ExtendedContext.to_integral_exact(Decimal('-Inf'))
        Decimal('-Infinity')
        """
        a = _convert_other(a, raiseit=True)
        return a.to_integral_exact(context=self)

    def to_integral_value(self, a):
        """Rounds to an integer.

        When the operand has a negative exponent, the result is the same
        as using the quantize() operation using the given operand as the
        left-hand-operand, 1E+0 as the right-hand-operand, and the precision
        of the operand as the precision setting, except that no flags will
        be set.  The rounding mode is taken from the context.

        >>> ExtendedContext.to_integral_value(Decimal('2.1'))
        Decimal('2')
        >>> ExtendedContext.to_integral_value(Decimal('100'))
        Decimal('100')
        >>> ExtendedContext.to_integral_value(Decimal('100.0'))
        Decimal('100')
        >>> ExtendedContext.to_integral_value(Decimal('101.5'))
        Decimal('102')
        >>> ExtendedContext.to_integral_value(Decimal('-101.5'))
        Decimal('-102')
        >>> ExtendedContext.to_integral_value(Decimal('10E+5'))
        Decimal('1.0E+6')
        >>> ExtendedContext.to_integral_value(Decimal('7.89E+77'))
        Decimal('7.89E+77')
        >>> ExtendedContext.to_integral_value(Decimal('-Inf'))
        Decimal('-Infinity')
        """
        a = _convert_other(a, raiseit=True)
        return a.to_integral_value(context=self)

    # the method name changed, but we provide also the old one, for compatibility
    to_integral = to_integral_value

class _WorkRep(object):
    __slots__ = ('sign','int','exp')
    # sign: 0 or 1
    # int:  int or long
    # exp:  None, int, or string

    def __init__(self, value=None):
        if value is None:
            self.sign = None
            self.int = 0
            self.exp = None
        elif isinstance(value, Decimal):
            self.sign = value._sign
            self.int = int(value._int)
            self.exp = value._exp
        else:
            # assert isinstance(value, tuple)
            self.sign = value[0]
            self.int = value[1]
            self.exp = value[2]

    def __repr__(self):
        return "(%r, %r, %r)" % (self.sign, self.int, self.exp)

    __str__ = __repr__


def _normalize(op1, op2, prec = 0):
    """Normalizes op1, op2 to have the same exp and length of coefficient.

    Done during addition.
    """
    if op1.exp < op2.exp:
        tmp = op2
        other = op1
    else:
        tmp = op1
        other = op2

    # If other is smaller than 10**exp (exp chosen so that adding it
    # cannot affect the rounded result), replace it with 10**exp; this
    # keeps tmp.exp - other.exp from getting too large.
    tmp_len = len(str(tmp.int))
    other_len = len(str(other.int))
    exp = tmp.exp + min(-1, tmp_len - prec - 2)
    if other_len + other.exp - 1 < exp:
        other.int = 1
        other.exp = exp

    tmp.int *= 10 ** (tmp.exp - other.exp)
    tmp.exp = other.exp
    return op1, op2

##### Integer arithmetic functions used by ln, log10, exp and __pow__ #####

def _nbits(n, correction = {
        '0': 4, '1': 3, '2': 2, '3': 2,
        '4': 1, '5': 1, '6': 1, '7': 1,
        '8': 0, '9': 0, 'a': 0, 'b': 0,
        'c': 0, 'd': 0, 'e': 0, 'f': 0}):
    """Number of bits in binary representation of the positive integer n,
    or 0 if n == 0.
    """
    if n < 0:
        raise ValueError("The argument to _nbits should be nonnegative.")
    hex_n = "%x" % n
    return 4*len(hex_n) + correction[hex_n[0]]

def _decimal_lshift_exact(n, e):
    """ Given integers n and e, return n * 10**e if it's an integer, else None.

    The computation is designed to avoid computing large powers of 10
    unnecessarily.

    >>> _decimal_lshift_exact(3, 4)
    30000
    >>> _decimal_lshift_exact(300, -999999999)  # returns None

    """
    if n == 0:
        return 0
    elif e >= 0:
        return n * 10**e
    else:
        # val_n = largest power of 10 dividing n.
        str_n = str(abs(n))
        val_n = len(str_n) - len(str_n.rstrip('0'))
        return None if val_n < -e else n // 10**-e

def _sqrt_nearest(n, a):
    """Closest integer to the square root of the positive integer n.  a is
    an initial approximation to the square root.  Any positive integer
    will do for a, but the closer a is to the square root of n the
    faster convergence will be.

    """
    if n <= 0 or a <= 0:
        raise ValueError("Both arguments to _sqrt_nearest should be positive.")

    b=0
    while a != b:
        b, a = a, a--n//a>>1
    return a

def _rshift_nearest(x, shift):
    """Given an integer x and a nonnegative integer shift, return closest
    integer to x / 2**shift; use round-to-even in case of a tie.

    """
    b, q = 1L << shift, x >> shift
    return q + (2*(x & (b-1)) + (q&1) > b)

def _div_nearest(a, b):
    """Closest integer to a/b, a and b positive integers; rounds to even
    in the case of a tie.

    """
    q, r = divmod(a, b)
    return q + (2*r + (q&1) > b)

def _ilog(x, M, L = 8):
    """Integer approximation to M*log(x/M), with absolute error boundable
    in terms only of x/M.

    Given positive integers x and M, return an integer approximation to
    M * log(x/M).  For L = 8 and 0.1 <= x/M <= 10 the difference
    between the approximation and the exact result is at most 22.  For
    L = 8 and 1.0 <= x/M <= 10.0 the difference is at most 15.  In
    both cases these are upper bounds on the error; it will usually be
    much smaller."""

    y = x-M
    # argument reduction; R = number of reductions performed
    R = 0
    while (R <= L and abs(y) << L-R > M or
           R > L and abs(y) >> R-L > M):
        y = _div_nearest(long(M*y) << 1,
                         M + _sqrt_nearest(M*(M+_rshift_nearest(y, R)), M))
        R += 1

    # Taylor series with T terms
    T = -int(-10*len(str(M))//(3*L))
    yshift = _rshift_nearest(y, R)
    w = _div_nearest(M, T)
    for k in xrange(T-1, 0, -1):
        w = _div_nearest(M, k) - _div_nearest(yshift*w, M)

    return _div_nearest(w*y, M)

def _dlog10(c, e, p):
    """Given integers c, e and p with c > 0, p >= 0, compute an integer
    approximation to 10**p * log10(c*10**e), with an absolute error of
    at most 1.  Assumes that c*10**e is not exactly 1."""

    # increase precision by 2; compensate for this by dividing
    # final result by 100
    p += 2

    # write c*10**e as d*10**f with either f >= 0 and 1 <= d <= 10,
    # or f <= 0 and 0.1 <= d <= 1; for c*10**e close to 1, f = 0
    l = len(str(c))
    f = e+l - (e+l >= 1)

    if p > 0:
        M = 10**p
        k = e+p-f
        if k >= 0:
            c *= 10**k
        else:
            c = _div_nearest(c, 10**-k)

        log_d = _ilog(c, M) # error < 5 + 22 = 27
        log_10 = _log10_digits(p) # error < 1
        log_d = _div_nearest(log_d*M, log_10)
        log_tenpower = f*M # exact
    else:
        log_d = 0  # error < 2.31
        log_tenpower = _div_nearest(f, 10**-p) # error < 0.5

    return _div_nearest(log_tenpower+log_d, 100)

def _dlog(c, e, p):
    """Given integers c, e and p with c > 0, compute an integer
    approximation to 10**p * log(c*10**e), with an absolute error of
    at most 1.  Assumes that c*10**e is not exactly 1."""

    # increase precision by 2; compensate for this by dividing
    # final result by 100
    p += 2

    # rewrite c*10**e as d*10**f with either f >= 0 and 1 <= d <= 10,
    # or f <= 0 and 0.1 <= d <= 1
    l = len(str(c))
    f = e+l - (e+l >= 1)

    # approximation to 10**p*log(d), with error < 27
    if p > 0:
        k = e+p-f
        if k >= 0:
            c *= 10**k
        else:
            c = _div_nearest(c, 10**-k)  # error of <= 0.5 in c

        log_d = _ilog(c, 10**p) # error < 5 + 22 = 27
    else:
        log_d = 0  # error < 2.31

    # approximation to f*10**p*log(10), with error < 11
    if f:
        extra = len(str(abs(f)))-1
        if p + extra >= 0:
            f_log_ten = _div_nearest(f*_log10_digits(p+extra), 10**extra)
        else:
            f_log_ten = 0
    else:
        f_log_ten = 0

    # error in sum < 11+27 = 38; error after division < 0.38 + 0.5 < 1
    return _div_nearest(f_log_ten + log_d, 100)

class _Log10Memoize(object):
    """Class to compute, store, and allow retrieval of, digits of the
    constant log(10) = 2.302585....  This constant is needed by
    Decimal.ln, Decimal.log10, Decimal.exp and Decimal.__pow__."""
    def __init__(self):
        self.digits = "23025850929940456840179914546843642076011014886"

    def getdigits(self, p):
        """Given an integer p >= 0, return floor(10**p)*log(10).

        For example, self.getdigits(3) returns 2302.
        """
        # digits are stored as a string, for quick conversion to
        # integer when enough digits have already been computed; the
        # stored digits should always be correct (truncated, not
        # rounded to nearest).
        if p < 0:
            raise ValueError("p should be nonnegative")

        if p >= len(self.digits):
            # compute p+3, p+6, p+9, ... digits; continue until at
            # least one of the extra digits is nonzero
            extra = 3
            while True:
                # compute p+extra digits, correct to within 1ulp
                M = 10**(p+extra+2)
                digits = str(_div_nearest(_ilog(10*M, M), 100))
                if digits[-extra:] != '0'*extra:
                    break
                extra += 3
            # keep all reliable digits so far; remove trailing zeros
            # and next nonzero digit
            self.digits = digits.rstrip('0')[:-1]
        return int(self.digits[:p+1])

_log10_digits = _Log10Memoize().getdigits

def _iexp(x, M, L=8):
    """Given integers x and M, M > 0, such that x/M is small in absolute
    value, compute an integer approximation to M*exp(x/M).  For 0 <=
    x/M <= 2.4, the absolute error in the result is bounded by 60 (and
    is usually much smaller)."""

    # Find R such that x/2**R/M <= 2**-L
    R = _nbits((long(x)<<L)//M)

    # Taylor series.  (2**L)**T > M
    T = -int(-10*len(str(M))//(3*L))
    y = _div_nearest(x, T)
    Mshift = long(M)<<R
    for i in xrange(T-1, 0, -1):
        y = _div_nearest(x*(Mshift + y), Mshift * i)

    # Expansion
    for k in xrange(R-1, -1, -1):
        Mshift = long(M)<<(k+2)
        y = _div_nearest(y*(y+Mshift), Mshift)

    return M+y

def _dexp(c, e, p):
    """Compute an approximation to exp(c*10**e), with p decimal places of
    precision.

    Returns integers d, f such that:

      10**(p-1) <= d <= 10**p, and
      (d-1)*10**f < exp(c*10**e) < (d+1)*10**f

    In other words, d*10**f is an approximation to exp(c*10**e) with p
    digits of precision, and with an error in d of at most 1.  This is
    almost, but not quite, the same as the error being < 1ulp: when d
    = 10**(p-1) the error could be up to 10 ulp."""

    # we'll call iexp with M = 10**(p+2), giving p+3 digits of precision
    p += 2

    # compute log(10) with extra precision = adjusted exponent of c*10**e
    extra = max(0, e + len(str(c)) - 1)
    q = p + extra

    # compute quotient c*10**e/(log(10)) = c*10**(e+q)/(log(10)*10**q),
    # rounding down
    shift = e+q
    if shift >= 0:
        cshift = c*10**shift
    else:
        cshift = c//10**-shift
    quot, rem = divmod(cshift, _log10_digits(q))

    # reduce remainder back to original precision
    rem = _div_nearest(rem, 10**extra)

    # error in result of _iexp < 120;  error after division < 0.62
    return _div_nearest(_iexp(rem, 10**p), 1000), quot - p + 3

def _dpower(xc, xe, yc, ye, p):
    """Given integers xc, xe, yc and ye representing Decimals x = xc*10**xe and
    y = yc*10**ye, compute x**y.  Returns a pair of integers (c, e) such that:

      10**(p-1) <= c <= 10**p, and
      (c-1)*10**e < x**y < (c+1)*10**e

    in other words, c*10**e is an approximation to x**y with p digits
    of precision, and with an error in c of at most 1.  (This is
    almost, but not quite, the same as the error being < 1ulp: when c
    == 10**(p-1) we can only guarantee error < 10ulp.)

    We assume that: x is positive and not equal to 1, and y is nonzero.
    iii
(RXRWR\RTR�RG(R
RR
RRRbtlxcR�tpcR�RJ((s/usr/lib64/python2.7/decimal.pyR�s
( !
idiFi5i(iiii
icCsA|dkrtd��nt|�}dt|�||dS(s@Compute a lower bound for 100*log10(c) for a positive integer c.is0The argument to _log10_lb should be nonnegative.id(R`RWRX(R5R�tstr_c((s/usr/lib64/python2.7/decimal.pyR�scCsqt|t�r|St|ttf�r2t|�S|rTt|t�rTtj|�S|rmtd|��ntS(s�Convert other to Decimal.

    Verifies that it's ok to use in an implicit construction.
    If allow_float is true, allow conversion from float;  this
    is used in the comparison methods (__eq__ and friends).

    sUnable to convert %s to Decimal(RQRRHR[RdReRfR�(R|R�R�((s/usr/lib64/python2.7/decimal.pyR��s

R4iR3RRR5i�ɚ;R*i6e�R�i	s�        # A numeric string consists of:
#    \s*
    (?P<sign>[-+])?              # an optional sign, followed by either...
    (
        (?=\d|\.\d)              # ...a number (with at least one digit)
        (?P<int>\d*)             # having a (possibly empty) integer part
        (\.(?P<frac>\d*))?       # followed by an optional fractional part
        (E(?P<exp>[-+]?\d+))?    # followed by an optional exponent, or...
    |
        Inf(inity)?              # ...an infinity, or...
    |
        (?P<signal>s)?           # ...an (optionally signaling)
        NaN                      # NaN
        (?P<diag>\d*)            # with (possibly empty) diagnostic info.
    )
#    \s*
    \Z
s0*$s50*$s�\A
(?:
   (?P<fill>.)?
   (?P<align>[<>=^])
)?
(?P<sign>[-+ ])?
(?P<zeropad>0)?
(?P<minimumwidth>(?!0)\d+)?
(?P<thousands_sep>,)?
(?:\.(?P<precision>0|(?!0)\d+))?
(?P<type>[eEfFgGn%])?
\Z
cCs`tj|�}|dkr.td|��n|j�}|d}|d}|ddk	|d<|dr�|dk	r�td|��n|dk	r�td|��q�n|p�d|d<|p�d|d<|d	dkr�d
|d	<nt|dp�d�|d<|d
dk	r+t|d
�|d
<n|d
dkrk|ddks[|ddkrkd|d
<qkn|ddkr�d|d<|dkr�tj�}n|ddk	r�td|��n|d|d<|d|d<|d|d<n7|ddkr
d|d<nddg|d<d|d<yt|t	�|d<Wnt
k
r[t|d<nX|S(sParse and validate a format specifier.

    Turns a standard numeric format specifier into a dict, with the
    following entries:

      fill: fill character to pad field to minimum width
      align: alignment type, either '<', '>', '=' or '^'
      sign: either '+', '-' or ' '
      minimumwidth: nonnegative integer giving minimum width
      zeropad: boolean, indicating whether to pad with zeros
      thousands_sep: string to use as thousands separator, or ''
      grouping: grouping for thousands separators, in format
        used by localeconv
      decimal_point: string to use for decimal point
      precision: nonnegative integer giving precision, or None
      type: one of the characters 'eEfFgG%', or None
      unicode: boolean (always True for Python 3.x)

    sInvalid format specifier: tfilltaligntzeropads7Fill character conflicts with '0' in format specifier: s2Alignment conflicts with '0' in format specifier: t t>R.RGtminimumwidthRFR�iR}R�iR$R�t
thousands_sepsJExplicit thousands separator conflicts with 'n' type in format specifier: tgroupingt
decimal_pointRiR�tunicodeN(t_parse_format_specifier_regextmatchRAR`t	groupdictRHt_localet
localeconvRQR�R�RY(tformat_specR�Ritformat_dictR�R�((s/usr/lib64/python2.7/decimal.pyR�XsV




 




c	Cs�|d}|d}||t|�t|�}|d}|dkrY|||}n|dkrv|||}nb|dkr�|||}nE|dkr�t|�d}|| ||||}ntd	��|d
r�t|�}n|S(sGiven an unpadded, non-aligned numeric string 'body' and sign
    string 'sign', add padding and alignment conforming to the given
    format specifier dictionary 'spec' (as produced by
    parse_format_specifier).

    Also converts result to unicode if necessary.

    R�R�R�t<R�t=t^isUnrecognised alignment fieldR�(RXR`R�(	R.R�R�R�R�tpaddingR�Rxthalf((s/usr/lib64/python2.7/decimal.pyR��s"




cCs�ddlm}m}|s gS|ddkr]t|�dkr]||d ||d��S|dtjkrx|d Std��dS(syConvert a localeconv-style grouping into a (possibly infinite)
    iterable of integers representing group lengths.

    i����(tchaintrepeatiii����s unrecognised format for groupingN(t	itertoolsRRRXR�tCHAR_MAXR`(R�RR((s/usr/lib64/python2.7/decimal.pyt_group_lengths�s
"cCs|d}|d}g}x�t|�D]�}|dkrHtd��nttt|�|d�|�}|jd|t|�||�|| }||8}|r�|dkr�Pn|t|�8}q'Wtt|�|d�}|jd|t|�||�|jt|��S(snInsert thousands separators into a digit string.

    spec is a dictionary whose keys should include 'thousands_sep' and
    'grouping'; typically it's the result of parsing the format
    specifier using _parse_format_specifier.

    The min_width keyword argument gives the minimum length of the
    result, which will be padded on the left with zeros if necessary.

    If necessary, the zero padding adds an extra '0' on the left to
    avoid a leading thousands separator.  For example, inserting
    commas every three digits in '123456', with min_width=8, gives
    '0,123,456', even though that has length 9.

    R�R�isgroup length should be positiveiRF(RR`R�R�RXRaRbtreversed(RlR�t	min_widthtsepR�tgroupsR6((s/usr/lib64/python2.7/decimal.pyt_insert_thousands_sep�s 

!$
$cCs*|r
dS|ddkr"|dSdSdS(sDetermine sign character.RGR.s +RN((tis_negativeR�((s/usr/lib64/python2.7/decimal.pyR�s
cCs�t||�}|r&|d|}n|dksB|ddkr�idd6dd6dd6dd6|d}|d	j||�7}n|dd
kr�|d
7}n|dr�|dt|�t|�}nd}t|||�}t||||�S(
scFormat a number, given the following data:

    is_negative: true if the number is negative, else false
    intpart: string of digits that must appear before the decimal point
    fracpart: string of digits that must come after the point
    exp: exponent, as an integer
    spec: dictionary resulting from parsing the format specifier

    This function uses the information in spec to:
      insert separators (decimal separator and thousands separators)
      format the sign
      format the exponent
      add trailing '%' for the '%' type
      zero-pad if necessary
      fill and align if necessary
    R�iR}R�R�R�R�R�s{0}{1:+}R�R�R�(R�tformatRXRR�(RRjRkRJR�R.techarR((s/usr/lib64/python2.7/decimal.pyR�s*

!tInfs-InfR�t__main__(kR#t__all__t__version__tmathRntnumberst_numberstcollectionsRt_namedtupleRtImportErrorRRRRRRRRtArithmeticErrorRRRR,tZeroDivisionErrorRR/R0R	R1R
RRR
RR�R=R8ROR6R9R?thasattrR>R:RRRARRRYR%tNumbertregisterRBRR]R�RRR�R�R�R�RWRTR�R�R�R�RGRRR�RRRtretcompiletVERBOSEt
IGNORECASEtUNICODER�RSR�R�R�tlocaleR�R�R�RRR�R�RSRRR*R?RR>R-R!tdoctestttestmodR7(((s/usr/lib64/python2.7/decimal.pyt<module>ts0	


&



	

	
	*��������������������#%					0	"	,#%	$	*#%				 
W	!	%	
	)"""Cache lines from files.

This is intended to read lines from modules imported -- hence if a filename
is not found, it will look down the module search path for a file by
that name.
"""

import sys
import os

__all__ = ["getline", "clearcache", "checkcache"]

def getline(filename, lineno, module_globals=None):
    lines = getlines(filename, module_globals)
    if 1 <= lineno <= len(lines):
        return lines[lineno-1]
    else:
        return ''


# The cache

cache = {}  # maps filename -> (size, mtime, lines, fullname)


def clearcache():
    """Clear the cache entirely."""

    global cache
    cache = {}


def getlines(filename, module_globals=None):
    """Get the lines for a file from the cache.
    Update the cache if it doesn't contain an entry for this file already."""

    if filename in cache:
        return cache[filename][2]

    try:
        return updatecache(filename, module_globals)
    except MemoryError:
        clearcache()
        return []


def checkcache(filename=None):
    """Discard cache entries that are out of date.
    (This is not checked upon each call!)"""

    if filename is None:
        filenames = cache.keys()
    else:
        if filename in cache:
            filenames = [filename]
        else:
            return

    for filename in filenames:
        size, mtime, lines, fullname = cache[filename]
        if mtime is None:
            continue   # no-op for files loaded via a __loader__
        try:
            stat = os.stat(fullname)
        except os.error:
            del cache[filename]
            continue
        if size != stat.st_size or mtime != stat.st_mtime:
            del cache[filename]


def updatecache(filename, module_globals=None):
    """Update a cache entry and return its list of lines.
    If something's wrong, print a message, discard the cache entry,
    and return an empty list."""

    if filename in cache:
        del cache[filename]
    if not filename or (filename.startswith('<') and filename.endswith('>')):
        return []

    fullname = filename
    try:
        stat = os.stat(fullname)
    except OSError:
        basename = filename

        # Try for a __loader__, if available
        if module_globals and '__loader__' in module_globals:
            name = module_globals.get('__name__')
            loader = module_globals['__loader__']
            get_source = getattr(loader, 'get_source', None)

            if name and get_source:
                try:
                    data = get_source(name)
                except (ImportError, IOError):
                    pass
                else:
                    if data is None:
                        # No luck, the PEP302 loader cannot find the source
                        # for this module.
                        return []
                    cache[filename] = (
                        len(data), None,
                        [line+'\n' for line in data.splitlines()], fullname
                    )
                    return cache[filename][2]

        # Try looking through the module search path, which is only useful
        # when handling a relative filename.
        if os.path.isabs(filename):
            return []

        for dirname in sys.path:
            # When using imputil, sys.path may contain things other than
            # strings; ignore them when it happens.
            try:
                fullname = os.path.join(dirname, basename)
            except (TypeError, AttributeError):
                # Not sufficiently string-like to do anything useful with.
                continue
            try:
                stat = os.stat(fullname)
                break
            except os.error:
                pass
        else:
            return []
    try:
        with open(fullname, 'rU') as fp:
            lines = fp.readlines()
    except IOError:
        return []
    if lines and not lines[-1].endswith('\n'):
        lines[-1] += '\n'
    size, mtime = stat.st_size, stat.st_mtime
    cache[filename] = size, mtime, lines, fullname
    return lines
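The module above is the stdlib linecache; a short usage sketch of the three public names exercises the whole flow (getlines/updatecache run implicitly under getline):

```python
import linecache
import os
import tempfile

# Write a small file, then read individual lines through the cache.
fd, path = tempfile.mkstemp(suffix='.txt')
os.close(fd)
with open(path, 'w') as f:
    f.write('first\nsecond\nthird\n')

line2 = linecache.getline(path, 2)      # 'second\n' (1-indexed)
missing = linecache.getline(path, 99)   # lineno out of range -> ''
print(repr(line2), repr(missing))

linecache.checkcache(path)  # drop the entry if the file changed on disk
linecache.clearcache()      # discard the whole cache
os.remove(path)
```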
[compiled bytecode: /usr/lib64/python2.7/timeit.pyc -- binary content not rendered; the recoverable docstrings follow]

Module docstring:

    Tool for measuring execution time of small code snippets.

    This module avoids a number of common traps for measuring execution
    times.  See also Tim Peters' introduction to the Algorithms chapter in
    the Python Cookbook, published by O'Reilly.

    Library usage: see the Timer class.

    Command line usage:
        python timeit.py [-n N] [-r N] [-s S] [-t] [-c] [-h] [--] [statement]

    Options:
      -n/--number N: how many times to execute 'statement' (default: see below)
      -r/--repeat N: how many times to repeat the timer (default 3)
      -s/--setup S: statement to be executed once initially (default 'pass')
      -t/--time: use time.time() (default on Unix)
      -c/--clock: use time.clock() (default on Windows)
      -v/--verbose: print raw timing results; repeat for more digits precision
      -h/--help: print this usage message and exit
      --: separate options from statement, use when statement starts with -
      statement: statement to be timed (default 'pass')

    A multi-line statement may be given by specifying each line as a
    separate argument; indented lines are possible by enclosing an
    argument in quotes and using leading spaces.  Multiple -s options are
    treated similarly.

    If -n is not given, a suitable number of loops is calculated by trying
    successive powers of 10 until the total time is at least 0.2 seconds.

    The difference in default timer function is because on Windows,
    clock() has microsecond granularity but time()'s granularity is 1/60th
    of a second; on Unix, clock() has 1/100th of a second granularity and
    time() is much more precise.  On either platform, the default timer
    functions measure wall clock time, not the CPU time.  This means that
    other processes running on the same computer may interfere with the
    timing.  The best thing to do when accurate timing is necessary is to
    repeat the timing a few times and use the best time.  The -r option is
    good for this; the default of 3 repetitions is probably enough in most
    cases.  On Unix, you can use clock() to measure CPU time.

    Note: there is a certain baseline overhead associated with executing a
    pass statement.  The code here doesn't try to hide it, but you should
    be aware of it.  The baseline overhead can be measured by invoking the
    program without arguments.

    The baseline overhead differs between Python versions!  Also, to
    fairly compare older Python versions to Python 2.3, you may want to
    use python -O for the older versions to avoid timing SET_LINENO
    instructions.

Timing template (each statement is compiled into this inner function):

    def inner(_it, _timer%(init)s):
        %(setup)s
        _t0 = _timer()
        for _i in _it:
            %(stmt)s
        _t1 = _timer()
        return _t1 - _t0

reindent(src, indent): helper to reindent a multi-line statement.

_template_func(setup, func): create a timer function; used if the
"statement" is a callable.

Timer: class for timing execution speed of small code snippets.  The
constructor takes a statement to be timed, an additional statement
used for setup, and a timer function.  Both statements default to
'pass'; the timer function is platform-dependent (see module doc
string).  To measure the execution time of the first statement, use
the timeit() method.  The repeat() method is a convenience to call
timeit() multiple times and return a list of results.  The statements
may contain newlines, as long as they don't contain multi-line string
literals.

Timer.print_exc(file=None): helper to print a traceback from the timed
code.  The advantage over the standard traceback is that source lines
in the compiled template will be displayed.  The optional file
argument directs where the traceback is sent; it defaults to
sys.stderr.

Timer.timeit(number=default_number): time 'number' executions of the
main statement.  To be precise, this executes the setup statement
once, and then returns the time it takes to execute the main statement
a number of times, as a float measured in seconds.  The argument is
the number of times through the loop, defaulting to one million.

Timer.repeat(repeat=default_repeat, number=default_number): call
timeit() a few times and return a list of results.  Note: it's
tempting to calculate mean and standard deviation from the result
vector and report these.  However, this is not very useful.  In a
typical case, the lowest value gives a lower bound for how fast your
machine can run the given code snippet; higher values in the result
vector are typically not caused by variability in Python's speed, but
by other processes interfering with your timing accuracy.  So the
min() of the result is probably the only number you should be
interested in.  After that, you should look at the entire vector and
apply common sense rather than statistics.

timeit(stmt, setup, timer, number) and repeat(stmt, setup, timer,
repeat, number): convenience functions that create a Timer object and
call the corresponding method.

main(args=None): main program, used when run as a script.  The return
value is an exit code to be passed to sys.exit(); it may be None to
indicate success.  When an exception happens during timing, a
traceback is printed to stderr and the return value is 1.
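The Timer docstrings recovered above translate directly into a short library-usage sketch (real timeit API; the statement being timed is just a placeholder):

```python
import timeit

# Time a short statement, taking the best of three runs; the timeit
# module docstring explains why min() is more meaningful than a mean.
t = timeit.Timer(stmt='"-".join(str(n) for n in range(20))')
results = t.repeat(repeat=3, number=1000)
print('best: %.6f sec for 1000 loops' % min(results))
```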
[compiled bytecode: /usr/lib64/python2.7/socket.pyc -- binary content not rendered; the recoverable docstrings follow]

Module docstring:

    This module provides socket operations and some related functions.
    On Unix, it supports IP (Internet Protocol) and Unix domain sockets.
    On other systems, it only supports IP. Functions specific for a
    socket are available as methods of the socket object.

    Functions:

    socket() -- create a new socket object
    socketpair() -- create a pair of new socket objects [*]
    fromfd() -- create a socket object from an open file descriptor [*]
    gethostname() -- return the current hostname
    gethostbyname() -- map a hostname to its IP number
    gethostbyaddr() -- map an IP number or hostname to DNS info
    getservbyname() -- map a service name and a protocol name to a port number
    getprotobyname() -- map a protocol name (e.g. 'tcp') to a number
    ntohs(), ntohl() -- convert 16, 32 bit int from network to host byte order
    htons(), htonl() -- convert 16, 32 bit int from host to network byte order
    inet_aton() -- convert IP addr string (123.45.67.89) to 32-bit packed format
    inet_ntoa() -- convert 32-bit packed format IP to string (123.45.67.89)
    ssl() -- secure socket layer support (only available if configured)
    socket.getdefaulttimeout() -- get the default timeout value
    socket.setdefaulttimeout() -- set the default timeout value
    create_connection() -- connects to an address, with an optional timeout and
                           optional source address.

     [*] not available on all platforms!

    Special objects:

    SocketType -- type object for socket objects
    error -- exception raised for I/O errors
    has_ipv6 -- boolean value indicating if IPv6 is supported

    Integer constants:

    AF_INET, AF_UNIX -- socket domains (first argument to socket() call)
    SOCK_STREAM, SOCK_DGRAM, SOCK_RAW -- socket types (second argument)

    Many other constants may be defined; these may be used in calls to
    the setsockopt() and getsockopt() methods.

getfqdn(name=''): get fully qualified domain name from name.  An empty
argument is interpreted as meaning the local host.  First the hostname
returned by gethostbyaddr() is checked, then possibly existing
aliases.  In case no FQDN is available, the hostname from
gethostname() is returned.

_socketobject: wrapper around the C-level socket object.  Its dup()
method returns a new socket object connected to the same system
resource, and makefile([mode[, bufsize]]) returns a regular file
object corresponding to the socket; the mode and bufsize arguments
are as for the built-in open() function.

_fileobject: faux file object attached to a socket object, providing
read(), readline(), readlines(), write(), writelines(), flush() and
iteration on top of buffered recv()/sendall() calls.

create_connection(address, timeout, source_address): connect to
*address* (a 2-tuple ``(host, port)``) and return the socket object.
Passing the optional *timeout* parameter will set the timeout on the
socket instance before attempting to connect.  If no *timeout* is
supplied, the global default timeout setting returned by
getdefaulttimeout() is used.  If *source_address* is set it must be a
tuple of (host, port) for the socket to bind as a source address
before making the connection.  A host of '' or port 0 tells the OS to
use the default.
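The wrappers described above can be exercised without touching the network via a connected socket pair. A sketch; note that socketpair() is one of the docstring's "[*] not available on all platforms" functions, so this assumes a Unix host:

```python
import socket

# getfqdn() follows the recovered docstring: check gethostbyaddr(),
# prefer an alias containing a dot, fall back to gethostname().
print(socket.getfqdn(''))

# A connected pair of local sockets exercises sendall()/recv()
# without needing a server.
a, b = socket.socketpair()
try:
    a.sendall(b'ping')
    data = b.recv(4)
    print(data)
finally:
    a.close()
    b.close()
```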
zfc@s�dZdefd��YZddddgZegZdZxTeD]LZyee�Z	Wne
k
rpqDnXes�e	Zneje	j�qDWes�e
de�nee�Zdd	d
�Z
dS(s�Generic interface to all dbm clones.

Instead of

        import dbm
        d = dbm.open(file, 'w', 0666)

use

        import anydbm
        d = anydbm.open(file, 'w')

The returned object is a dbhash, gdbm, dbm or dumbdbm object,
dependent on the type of database being opened (determined by whichdb
module) in the case of an existing dbm. If the dbm does not exist and
the create or new flag ('c' or 'n') was specified, the dbm type will
be determined by the availability of the modules (tested in the above
order).

It has the following interface (key and data are strings):

        d[key] = data   # store data at key (may override data at
                        # existing key)
        data = d[key]   # retrieve data at key (raise KeyError if no
                        # such key)
        del d[key]      # delete data stored at key (raises KeyError
                        # if no such key)
        flag = key in d   # true if the key exists
        list = d.keys() # return a list of all existing keys (slow!)

Future versions may change the order in which implementations are
tested for existence, and add interfaces to other dbm-like
implementations.
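The fallback-import pattern described above (try each candidate in order, keep the first that imports) can be sketched independently; the candidate names below are placeholders for the demonstration, not anydbm's real preference list.

```python
def first_available(names):
    """Return the first importable module from names, or None."""
    for name in names:
        try:
            return __import__(name)
        except ImportError:
            continue
    return None

# 'nosuchmodule_xyz' is a deliberately missing name; 'json' stands in
# for whichever implementation happens to be installed.
mod = first_available(['nosuchmodule_xyz', 'json'])
```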
class error(Exception):
    pass

_names = ['dbhash', 'gdbm', 'dbm', 'dumbdbm']
_errors = [error]
_defaultmod = None

for _name in _names:
    try:
        _mod = __import__(_name)
    except ImportError:
        continue
    if not _defaultmod:
        _defaultmod = _mod
    _errors.append(_mod.error)

if not _defaultmod:
    raise ImportError, "no dbm clone found; tried %s" % _names

error = tuple(_errors)


def open(file, flag='r', mode=0666):
    """Open or create database at path given by *file*.

    Optional argument *flag* can be 'r' (default) for read-only access, 'w'
    for read-write access of an existing database, 'c' for read-write access
    to a new or existing database, and 'n' for read-write access to a new
    database.

    Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it
    only if it doesn't exist; and 'n' always creates a new database.
    """
    # guess the type of an existing database
    from whichdb import whichdb
    result = whichdb(file)
    if result is None:
        # db doesn't exist or 'whichdb' misread it
        if 'c' in flag or 'n' in flag:
            mod = _defaultmod
        else:
            raise error, "need 'c' or 'n' flag to open new db"
    elif result == "":
        # db type cannot be determined
        raise error, "db type could not be determined"
    else:
        mod = __import__(result)
    return mod.open(file, flag, mode)
Simple class to read IFF chunks.

An IFF chunk (used in formats such as AIFF, TIFF, RMFF (RealMedia File
Format)) has the following structure:

+----------------+
| ID (4 bytes)   |
+----------------+
| size (4 bytes) |
+----------------+
| data           |
| ...            |
+----------------+

The ID is a 4-byte string which identifies the type of chunk.

The size field (a 32-bit value, encoded using big-endian byte order)
gives the size of the whole chunk, including the 8-byte header.

Usually an IFF-type file consists of one or more chunks.  The proposed
usage of the Chunk class defined here is to instantiate an instance at
the start of each chunk and read from the instance until it reaches
the end, after which a new instance can be instantiated.  At the end
of the file, creating a new instance will fail with an EOFError
exception.

Usage:
while True:
    try:
        chunk = Chunk(file)
    except EOFError:
        break
    chunktype = chunk.getname()
    while True:
        data = chunk.read(nbytes)
        if not data:
            break
        # do something with data

The interface is file-like.  The implemented methods are:
read, close, seek, tell, isatty.
Extra methods are: skip() (called by close, skips to the end of the chunk),
getname() (returns the name (ID) of the chunk)

The __init__ method has one required argument, a file-like object
(including a chunk instance), and one optional argument, a flag which
specifies whether or not chunks are aligned on 2-byte boundaries.  The
default is 1, i.e. aligned.
class Chunk:
    def __init__(self, file, align=True, bigendian=True, inclheader=False):
        import struct
        self.closed = False
        self.align = align      # whether to align to word (2-byte) boundaries
        if bigendian:
            strflag = '>'
        else:
            strflag = '<'
        self.file = file
        self.chunkname = file.read(4)
        if len(self.chunkname) < 4:
            raise EOFError
        try:
            self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
        except struct.error:
            raise EOFError
        if inclheader:
            self.chunksize = self.chunksize - 8 # subtract header
        self.size_read = 0
        try:
            self.offset = self.file.tell()
        except (AttributeError, IOError):
            self.seekable = False
        else:
            self.seekable = True

    def getname(self):
        """Return the name (ID) of the current chunk."""
        return self.chunkname

    def getsize(self):
        """Return the size of the current chunk."""
        return self.chunksize

    def close(self):
        if not self.closed:
            try:
                self.skip()
            finally:
                self.closed = True

    def isatty(self):
        if self.closed:
            raise ValueError, "I/O operation on closed file"
        return False

    def seek(self, pos, whence=0):
        """Seek to specified position into the chunk.
        Default position is 0 (start of chunk).
        If the file is not seekable, this will result in an error.
        """
        if self.closed:
            raise ValueError, "I/O operation on closed file"
        if not self.seekable:
            raise IOError, "cannot seek"
        if whence == 1:
            pos = pos + self.size_read
        elif whence == 2:
            pos = pos + self.chunksize
        if pos < 0 or pos > self.chunksize:
            raise RuntimeError
        self.file.seek(self.offset + pos, 0)
        self.size_read = pos

    def tell(self):
        if self.closed:
            raise ValueError, "I/O operation on closed file"
        return self.size_read

    def read(self, size=-1):
        """Read at most size bytes from the chunk.
        If size is omitted or negative, read until the end
        of the chunk.
        """
        if self.closed:
            raise ValueError, "I/O operation on closed file"
        if self.size_read >= self.chunksize:
            return ''
        if size < 0:
            size = self.chunksize - self.size_read
        if size > self.chunksize - self.size_read:
            size = self.chunksize - self.size_read
        data = self.file.read(size)
        self.size_read = self.size_read + len(data)
        if self.size_read == self.chunksize and \
           self.align and \
           (self.chunksize & 1):
            dummy = self.file.read(1)
            self.size_read = self.size_read + len(dummy)
        return data

    def skip(self):
        """Skip the rest of the chunk.
        If you are not interested in the contents of the chunk,
        this method should be called so that the file points to
        the start of the next chunk.
        """
        if self.closed:
            raise ValueError, "I/O operation on closed file"
        if self.seekable:
            try:
                n = self.chunksize - self.size_read
                # maybe fix alignment
                if self.align and (self.chunksize & 1):
                    n = n + 1
                self.file.seek(n, 1)
                self.size_read = self.size_read + n
                return
            except IOError:
                pass
        while self.size_read < self.chunksize:
            n = min(8192, self.chunksize - self.size_read)
            dummy = self.read(n)
            if not dummy:
                raise EOFError
Conversion pipeline templates.

The problem:
------------

Suppose you have some data that you want to convert to another format,
such as from GIF image format to PPM image format.  Maybe the
conversion involves several steps (e.g. piping it through compress or
uuencode).  Some of the conversion steps may require that their input
is a disk file, others may be able to read standard input; similar for
their output.  The input to the entire conversion may also be read
from a disk file or from an open file, and similar for its output.

The module lets you construct a pipeline template by sticking one or
more conversion steps together.  It will take care of creating and
removing temporary files if they are necessary to hold intermediate
data.  You can then use the template to do conversions from many
different sources to many different destinations.  The temporary
file names used are different each time the template is used.

The templates are objects so you can create templates for many
different conversion steps and store them in a dictionary, for
instance.


Directions:
-----------

To create a template:
    t = Template()

To add a conversion step to a template:
   t.append(command, kind)
where kind is a string of two characters: the first is '-' if the
command reads its standard input or 'f' if it requires a file; the
second likewise for the output. The command must be valid /bin/sh
syntax.  If input or output files are required, they are passed as
$IN and $OUT; otherwise, it must be  possible to use the command in
a pipeline.

To add a conversion step at the beginning:
   t.prepend(command, kind)

To convert a file to another file using a template:
  sts = t.copy(infile, outfile)
If infile or outfile are the empty string, standard input is read or
standard output is written, respectively.  The return value is the
exit status of the conversion pipeline.

To open a file for reading or writing through a conversion pipeline:
   fp = t.open(file, mode)
where mode is 'r' to read the file, or 'w' to write it -- just like
for the built-in function open() or for os.popen().

To create a new template object initialized to a given one:
   t2 = t.clone()
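The $IN/$OUT convention described in the directions above can be illustrated outside the Template class; instantiate() and its quoting helper are invented for this sketch and are not part of the pipes module.

```python
def shell_quote(s):
    # Single-quote the string, escaping any embedded single quotes.
    return "'" + s.replace("'", "'\"'\"'") + "'"

def instantiate(cmd, infile=None, outfile=None):
    """Prefix a step with IN=/OUT= shell assignments, quoting the paths."""
    if outfile is not None:
        cmd = 'OUT=' + shell_quote(outfile) + '; ' + cmd
    if infile is not None:
        cmd = 'IN=' + shell_quote(infile) + '; ' + cmd
    return cmd

step = instantiate('tr a-z A-Z <$IN >$OUT', infile='my file', outfile='out.txt')
```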
import string

__all__ = ["Template"]

# Conversion step kinds

FILEIN_FILEOUT = 'ff'               # Must read & write real files
STDIN_FILEOUT  = '-f'               # Must write a real file
FILEIN_STDOUT  = 'f-'               # Must read a real file
STDIN_STDOUT   = '--'               # Normal pipeline element
SOURCE         = '.-'               # Must be first, writes stdout
SINK           = '-.'               # Must be last, reads stdin

stepkinds = [FILEIN_FILEOUT, STDIN_FILEOUT, FILEIN_STDOUT, STDIN_STDOUT,
             SOURCE, SINK]

# (Template class and makepipeline(): compiled bytecode only, omitted)

_safechars = frozenset(string.ascii_letters + string.digits + '@%_-+=:,./')

def quote(file):
    """Return a shell-escaped version of the file string."""
    for c in file:
        if c not in _safechars:
            break
    else:
        if not file:
            return "''"
        return file
    # use single quotes, and put single quotes into double quotes
    # the string $'b is then quoted as '$'"'"'b'
    return "'" + file.replace("'", "'\"'\"'") + "'"
The Tab Nanny despises ambiguous indentation.  She knows no mercy.

tabnanny -- Detection of ambiguous indentation

For the time being this module is intended to be called as a script.
However it is possible to import it into an IDE and use the function
check() described below.

Warning: The API provided by this module is likely to change in future
releases; such changes may not be backward compatible.
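The ambiguity this module hunts for can be sketched without the module itself: two indentation prefixes are ambiguous when their relative order depends on the assumed tab size. indent_level() below mirrors the idea, not tabnanny's exact code.

```python
def indent_level(ws, tabsize):
    """Width of leading whitespace ws with tabs expanded to tabsize."""
    level = 0
    for ch in ws:
        if ch == '\t':
            level = (level // tabsize + 1) * tabsize
        else:
            level += 1
    return level

def ambiguous(ws1, ws2, sizes=(1, 2, 4, 8)):
    """True if comparing the two indents gives different answers per tab size."""
    results = set()
    for size in sizes:
        a, b = indent_level(ws1, size), indent_level(ws2, size)
        results.add((a > b) - (a < b))   # -1, 0 or 1, like cmp()
    return len(results) > 1

flag = ambiguous('\t', ' ' * 8)   # one tab vs. eight spaces
```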
__version__ = "6"

NannyNag
    Raised by process_tokens() if detecting an ambiguous indent.
    Captured and handled in check().

check(file_or_dir)
    If file_or_dir is a directory and not a symbolic link, then recursively
    descend the directory tree named by file_or_dir, checking all .py files
    along the way. If file_or_dir is an ordinary Python source file, it is
    checked for whitespace related problems. The diagnostic messages are
    written to standard output using the print statement.

(remainder of the module: compiled bytecode only, omitted)

#! /usr/bin/python2.7

"""A Python debugger."""

# (See pdb.doc for documentation.)

import sys
import linecache
import cmd
import bdb
from repr import Repr
import os
import re
import pprint
import traceback


class Restart(Exception):
    """Causes a debugger to be restarted for the debugged python program."""
    pass

# Create a custom safe Repr instance and increase its maxstring.
# The default of 30 truncates error messages too easily.
_repr = Repr()
_repr.maxstring = 200
_saferepr = _repr.repr

__all__ = ["run", "pm", "Pdb", "runeval", "runctx", "runcall", "set_trace",
           "post_mortem", "help"]

def find_function(funcname, filename):
    cre = re.compile(r'def\s+%s\s*[(]' % re.escape(funcname))
    try:
        fp = open(filename)
    except IOError:
        return None
    # consumer of this info expects the first line to be 1
    lineno = 1
    answer = None
    while 1:
        line = fp.readline()
        if line == '':
            break
        if cre.match(line):
            answer = funcname, filename, lineno
            break
        lineno = lineno + 1
    fp.close()
    return answer


# Interaction prompt line will separate file and call info from code
# text using value of line_prefix string.  A newline and arrow may
# be to your liking.  You can set it once pdb is imported using the
# command "pdb.line_prefix = '\n% '".
# line_prefix = ': '    # Use this to get the old situation back
line_prefix = '\n-> '   # Probably a better default

class Pdb(bdb.Bdb, cmd.Cmd):

    def __init__(self, completekey='tab', stdin=None, stdout=None, skip=None):
        bdb.Bdb.__init__(self, skip=skip)
        cmd.Cmd.__init__(self, completekey, stdin, stdout)
        if stdout:
            self.use_rawinput = 0
        self.prompt = '(Pdb) '
        self.aliases = {}
        self.mainpyfile = ''
        self._wait_for_mainpyfile = 0
        # Try to load readline if it exists
        try:
            import readline
        except ImportError:
            pass

        # Read $HOME/.pdbrc and ./.pdbrc
        self.rcLines = []
        if 'HOME' in os.environ:
            envHome = os.environ['HOME']
            try:
                rcFile = open(os.path.join(envHome, ".pdbrc"))
            except IOError:
                pass
            else:
                for line in rcFile.readlines():
                    self.rcLines.append(line)
                rcFile.close()
        try:
            rcFile = open(".pdbrc")
        except IOError:
            pass
        else:
            for line in rcFile.readlines():
                self.rcLines.append(line)
            rcFile.close()

        self.commands = {} # associates a command list to breakpoint numbers
        self.commands_doprompt = {} # for each bp num, tells if the prompt
                                    # must be disp. after execing the cmd list
        self.commands_silent = {} # for each bp num, tells if the stack trace
                                  # must be disp. after execing the cmd list
        self.commands_defining = False # True while in the process of defining
                                       # a command list
        self.commands_bnum = None # The breakpoint number for which we are
                                  # defining a list

    def reset(self):
        bdb.Bdb.reset(self)
        self.forget()

    def forget(self):
        self.lineno = None
        self.stack = []
        self.curindex = 0
        self.curframe = None

    def setup(self, f, t):
        self.forget()
        self.stack, self.curindex = self.get_stack(f, t)
        self.curframe = self.stack[self.curindex][0]
        # The f_locals dictionary is updated from the actual frame
        # locals whenever the .f_locals accessor is called, so we
        # cache it here to ensure that modifications are not overwritten.
        self.curframe_locals = self.curframe.f_locals
        self.execRcLines()

    # Can be executed earlier than 'setup' if desired
    def execRcLines(self):
        if self.rcLines:
            # Make local copy because of recursion
            rcLines = self.rcLines
            # executed only once
            self.rcLines = []
            for line in rcLines:
                line = line[:-1]
                if len(line) > 0 and line[0] != '#':
                    self.onecmd(line)

    # Override Bdb methods

    def user_call(self, frame, argument_list):
        """This method is called when there is the remote possibility
        that we ever need to stop in this function."""
        if self._wait_for_mainpyfile:
            return
        if self.stop_here(frame):
            print >>self.stdout, '--Call--'
            self.interaction(frame, None)

    def user_line(self, frame):
        """This function is called when we stop or break at this line."""
        if self._wait_for_mainpyfile:
            if (self.mainpyfile != self.canonic(frame.f_code.co_filename)
                or frame.f_lineno <= 0):
                return
            self._wait_for_mainpyfile = 0
        if self.bp_commands(frame):
            self.interaction(frame, None)

    def bp_commands(self,frame):
        """Call every command that was set for the current active breakpoint
        (if there is one).

        Returns True if the normal interaction function must be called,
        False otherwise."""
        # self.currentbp is set in bdb in Bdb.break_here if a breakpoint was hit
        if getattr(self, "currentbp", False) and \
               self.currentbp in self.commands:
            currentbp = self.currentbp
            self.currentbp = 0
            lastcmd_back = self.lastcmd
            self.setup(frame, None)
            for line in self.commands[currentbp]:
                self.onecmd(line)
            self.lastcmd = lastcmd_back
            if not self.commands_silent[currentbp]:
                self.print_stack_entry(self.stack[self.curindex])
            if self.commands_doprompt[currentbp]:
                self.cmdloop()
            self.forget()
            return
        return 1

    def user_return(self, frame, return_value):
        """This function is called when a return trap is set here."""
        if self._wait_for_mainpyfile:
            return
        frame.f_locals['__return__'] = return_value
        print >>self.stdout, '--Return--'
        self.interaction(frame, None)

    def user_exception(self, frame, exc_info):
        """This function is called if an exception occurs,
        but only if we are to stop at or just below this level."""
        if self._wait_for_mainpyfile:
            return
        exc_type, exc_value, exc_traceback = exc_info
        frame.f_locals['__exception__'] = exc_type, exc_value
        if type(exc_type) == type(''):
            exc_type_name = exc_type
        else: exc_type_name = exc_type.__name__
        print >>self.stdout, exc_type_name + ':', _saferepr(exc_value)
        self.interaction(frame, exc_traceback)

    # General interaction function

    def interaction(self, frame, traceback):
        self.setup(frame, traceback)
        self.print_stack_entry(self.stack[self.curindex])
        self.cmdloop()
        self.forget()

    def displayhook(self, obj):
        """Custom displayhook for the exec in default(), which prevents
        assignment of the _ variable in the builtins.
        """
        # reproduce the behavior of the standard displayhook, not printing None
        if obj is not None:
            print repr(obj)

    def default(self, line):
        if line[:1] == '!': line = line[1:]
        locals = self.curframe_locals
        globals = self.curframe.f_globals
        try:
            code = compile(line + '\n', '<stdin>', 'single')
            save_stdout = sys.stdout
            save_stdin = sys.stdin
            save_displayhook = sys.displayhook
            try:
                sys.stdin = self.stdin
                sys.stdout = self.stdout
                sys.displayhook = self.displayhook
                exec code in globals, locals
            finally:
                sys.stdout = save_stdout
                sys.stdin = save_stdin
                sys.displayhook = save_displayhook
        except:
            t, v = sys.exc_info()[:2]
            if type(t) == type(''):
                exc_type_name = t
            else: exc_type_name = t.__name__
            print >>self.stdout, '***', exc_type_name + ':', v

    def precmd(self, line):
        """Handle alias expansion and ';;' separator."""
        if not line.strip():
            return line
        args = line.split()
        while args[0] in self.aliases:
            line = self.aliases[args[0]]
            ii = 1
            for tmpArg in args[1:]:
                line = line.replace("%" + str(ii),
                                      tmpArg)
                ii = ii + 1
            line = line.replace("%*", ' '.join(args[1:]))
            args = line.split()
        # split into ';;' separated commands
        # unless it's an alias command
        if args[0] != 'alias':
            marker = line.find(';;')
            if marker >= 0:
                # queue up everything after marker
                next = line[marker+2:].lstrip()
                self.cmdqueue.append(next)
                line = line[:marker].rstrip()
        return line
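The '%1' ... '%n' and '%*' substitution performed by precmd() can be isolated into a stand-alone helper; expand_alias() is an invented name, and it adds a guard for empty lines that the method above gets from its caller.

```python
def expand_alias(line, aliases):
    """Repeatedly expand a leading alias, substituting %1..%n and %*."""
    args = line.split()
    while args and args[0] in aliases:
        line = aliases[args[0]]
        for ii, tmp_arg in enumerate(args[1:], 1):
            line = line.replace('%' + str(ii), tmp_arg)
        line = line.replace('%*', ' '.join(args[1:]))
        args = line.split()
    return line

expanded = expand_alias('pi self.x', {'pi': 'p %1'})
```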

    def onecmd(self, line):
        """Interpret the argument as though it had been typed in response
        to the prompt.

        Checks whether this line is typed at the normal prompt or in
        a breakpoint command list definition.
        """
        if not self.commands_defining:
            return cmd.Cmd.onecmd(self, line)
        else:
            return self.handle_command_def(line)

    def handle_command_def(self,line):
        """Handles one command line during command list definition."""
        cmd, arg, line = self.parseline(line)
        if not cmd:
            return
        if cmd == 'silent':
            self.commands_silent[self.commands_bnum] = True
            return # continue to handle other cmd def in the cmd list
        elif cmd == 'end':
            self.cmdqueue = []
            return 1 # end of cmd list
        cmdlist = self.commands[self.commands_bnum]
        if arg:
            cmdlist.append(cmd+' '+arg)
        else:
            cmdlist.append(cmd)
        # Determine if we must stop
        try:
            func = getattr(self, 'do_' + cmd)
        except AttributeError:
            func = self.default
        # one of the resuming commands
        if func.func_name in self.commands_resuming:
            self.commands_doprompt[self.commands_bnum] = False
            self.cmdqueue = []
            return 1
        return

    # Command definitions, called by cmdloop()
    # The argument is the remaining string on the command line
    # Return true to exit from the command loop

    do_h = cmd.Cmd.do_help

    def do_commands(self, arg):
        """Defines a list of commands associated to a breakpoint.

        Those commands will be executed whenever the breakpoint causes
        the program to stop execution."""
        if not arg:
            bnum = len(bdb.Breakpoint.bpbynumber)-1
        else:
            try:
                bnum = int(arg)
            except:
                print >>self.stdout, "Usage : commands [bnum]\n        ..." \
                                     "\n        end"
                return
        self.commands_bnum = bnum
        self.commands[bnum] = []
        self.commands_doprompt[bnum] = True
        self.commands_silent[bnum] = False
        prompt_back = self.prompt
        self.prompt = '(com) '
        self.commands_defining = True
        try:
            self.cmdloop()
        finally:
            self.commands_defining = False
            self.prompt = prompt_back

    def do_break(self, arg, temporary = 0):
        # break [ ([filename:]lineno | function) [, "condition"] ]
        if not arg:
            if self.breaks:  # There's at least one
                print >>self.stdout, "Num Type         Disp Enb   Where"
                for bp in bdb.Breakpoint.bpbynumber:
                    if bp:
                        bp.bpprint(self.stdout)
            return
        # parse arguments; comma has lowest precedence
        # and cannot occur in filename
        filename = None
        lineno = None
        cond = None
        comma = arg.find(',')
        if comma > 0:
            # parse stuff after comma: "condition"
            cond = arg[comma+1:].lstrip()
            arg = arg[:comma].rstrip()
        # parse stuff before comma: [filename:]lineno | function
        colon = arg.rfind(':')
        funcname = None
        if colon >= 0:
            filename = arg[:colon].rstrip()
            f = self.lookupmodule(filename)
            if not f:
                print >>self.stdout, '*** ', repr(filename),
                print >>self.stdout, 'not found from sys.path'
                return
            else:
                filename = f
            arg = arg[colon+1:].lstrip()
            try:
                lineno = int(arg)
            except ValueError, msg:
                print >>self.stdout, '*** Bad lineno:', arg
                return
        else:
            # no colon; can be lineno or function
            try:
                lineno = int(arg)
            except ValueError:
                try:
                    func = eval(arg,
                                self.curframe.f_globals,
                                self.curframe_locals)
                except:
                    func = arg
                try:
                    if hasattr(func, 'im_func'):
                        func = func.im_func
                    code = func.func_code
                    #use co_name to identify the bkpt (function names
                    #could be aliased, but co_name is invariant)
                    funcname = code.co_name
                    lineno = code.co_firstlineno
                    filename = code.co_filename
                except:
                    # last thing to try
                    (ok, filename, ln) = self.lineinfo(arg)
                    if not ok:
                        print >>self.stdout, '*** The specified object',
                        print >>self.stdout, repr(arg),
                        print >>self.stdout, 'is not a function'
                        print >>self.stdout, 'or was not found along sys.path.'
                        return
                    funcname = ok # ok contains a function name
                    lineno = int(ln)
        if not filename:
            filename = self.defaultFile()
        # Check for reasonable breakpoint
        line = self.checkline(filename, lineno)
        if line:
            # now set the break point
            err = self.set_break(filename, line, temporary, cond, funcname)
            if err: print >>self.stdout, '***', err
            else:
                bp = self.get_breaks(filename, line)[-1]
                print >>self.stdout, "Breakpoint %d at %s:%d" % (bp.number,
                                                                 bp.file,
                                                                 bp.line)

    # To be overridden in derived debuggers
    def defaultFile(self):
        """Produce a reasonable default."""
        filename = self.curframe.f_code.co_filename
        if filename == '<string>' and self.mainpyfile:
            filename = self.mainpyfile
        return filename

    do_b = do_break

    def do_tbreak(self, arg):
        self.do_break(arg, 1)

    def lineinfo(self, identifier):
        failed = (None, None, None)
        # Input is identifier, may be in single quotes
        idstring = identifier.split("'")
        if len(idstring) == 1:
            # not in single quotes
            id = idstring[0].strip()
        elif len(idstring) == 3:
            # quoted
            id = idstring[1].strip()
        else:
            return failed
        if id == '': return failed
        parts = id.split('.')
        # Protection for derived debuggers
        if parts[0] == 'self':
            del parts[0]
            if len(parts) == 0:
                return failed
        # Best first guess at file to look at
        fname = self.defaultFile()
        if len(parts) == 1:
            item = parts[0]
        else:
            # More than one part.
            # First is module, second is method/class
            f = self.lookupmodule(parts[0])
            if f:
                fname = f
            item = parts[1]
        answer = find_function(item, fname)
        return answer or failed

    def checkline(self, filename, lineno):
        """Check whether specified line seems to be executable.

        Return `lineno` if it is, 0 if not (e.g. a docstring, comment, blank
        line or EOF). Warning: testing is not comprehensive.
        """
        # this method should be callable before starting debugging, so default
        # to "no globals" if there is no current frame
        globs = self.curframe.f_globals if hasattr(self, 'curframe') else None
        line = linecache.getline(filename, lineno, globs)
        if not line:
            print >>self.stdout, 'End of file'
            return 0
        line = line.strip()
        # Don't allow setting breakpoint at a blank line
        if (not line or (line[0] == '#') or
             (line[:3] == '"""') or line[:3] == "'''"):
            print >>self.stdout, '*** Blank or comment'
            return 0
        return lineno
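`checkline` boils down to a simple textual test on the fetched source line. The following standalone sketch (written in Python 3 syntax, unlike the Python 2 module above; the name `is_breakable` is illustrative, not pdb's API) reproduces that test against a throwaway file:

```python
import linecache
import os
import tempfile

def is_breakable(filename, lineno):
    # Same test checkline applies: no breakpoint on a missing or blank
    # line, a comment, or a line that opens a docstring.
    line = linecache.getline(filename, lineno).strip()
    if not line or line.startswith('#') or line[:3] in ('"""', "'''"):
        return 0
    return lineno

# Demo against a temporary source file.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write('# a comment\n\nx = 1\n')
    path = f.name
results = [is_breakable(path, n) for n in (1, 2, 3)]
linecache.clearcache()
os.unlink(path)
print(results)  # [0, 0, 3]: comment and blank line refused, code accepted
```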

    def do_enable(self, arg):
        args = arg.split()
        for i in args:
            try:
                i = int(i)
            except ValueError:
                print >>self.stdout, 'Breakpoint index %r is not a number' % i
                continue

            if not (0 <= i < len(bdb.Breakpoint.bpbynumber)):
                print >>self.stdout, 'No breakpoint numbered', i
                continue

            bp = bdb.Breakpoint.bpbynumber[i]
            if bp:
                bp.enable()

    def do_disable(self, arg):
        args = arg.split()
        for i in args:
            try:
                i = int(i)
            except ValueError:
                print >>self.stdout, 'Breakpoint index %r is not a number' % i
                continue

            if not (0 <= i < len(bdb.Breakpoint.bpbynumber)):
                print >>self.stdout, 'No breakpoint numbered', i
                continue

            bp = bdb.Breakpoint.bpbynumber[i]
            if bp:
                bp.disable()

    def do_condition(self, arg):
        # arg is breakpoint number and condition
        args = arg.split(' ', 1)
        try:
            bpnum = int(args[0].strip())
        except ValueError:
            # something went wrong
            print >>self.stdout, \
                'Breakpoint index %r is not a number' % args[0]
            return
        try:
            cond = args[1]
        except IndexError:
            cond = None
        try:
            bp = bdb.Breakpoint.bpbynumber[bpnum]
        except IndexError:
            print >>self.stdout, 'Breakpoint index %r is not valid' % args[0]
            return
        if bp:
            bp.cond = cond
            if not cond:
                print >>self.stdout, 'Breakpoint', bpnum,
                print >>self.stdout, 'is now unconditional.'

    def do_ignore(self, arg):
        """arg is bp number followed by ignore count."""
        args = arg.split()
        try:
            bpnum = int(args[0].strip())
        except ValueError:
            # something went wrong
            print >>self.stdout, \
                'Breakpoint index %r is not a number' % args[0]
            return
        try:
            count = int(args[1].strip())
        except (ValueError, IndexError):
            count = 0
        try:
            bp = bdb.Breakpoint.bpbynumber[bpnum]
        except IndexError:
            print >>self.stdout, 'Breakpoint index %r is not valid' % args[0]
            return
        if bp:
            bp.ignore = count
            if count > 0:
                reply = 'Will ignore next '
                if count > 1:
                    reply = reply + '%d crossings' % count
                else:
                    reply = reply + '1 crossing'
                print >>self.stdout, reply + ' of breakpoint %d.' % bpnum
            else:
                print >>self.stdout, 'Will stop next time breakpoint',
                print >>self.stdout, bpnum, 'is reached.'

    def do_clear(self, arg):
        """Three possibilities, tried in this order:
        clear -> clear all breaks, ask for confirmation
        clear file:lineno -> clear all breaks at file:lineno
        clear bpno bpno ... -> clear breakpoints by number"""
        if not arg:
            try:
                reply = raw_input('Clear all breaks? ')
            except EOFError:
                reply = 'no'
            reply = reply.strip().lower()
            if reply in ('y', 'yes'):
                self.clear_all_breaks()
            return
        if ':' in arg:
            # Make sure it works for "clear C:\foo\bar.py:12"
            i = arg.rfind(':')
            filename = arg[:i]
            arg = arg[i+1:]
            try:
                lineno = int(arg)
            except ValueError:
                err = "Invalid line number (%s)" % arg
            else:
                err = self.clear_break(filename, lineno)
            if err: print >>self.stdout, '***', err
            return
        numberlist = arg.split()
        for i in numberlist:
            try:
                i = int(i)
            except ValueError:
                print >>self.stdout, 'Breakpoint index %r is not a number' % i
                continue

            if not (0 <= i < len(bdb.Breakpoint.bpbynumber)):
                print >>self.stdout, 'No breakpoint numbered', i
                continue
            err = self.clear_bpbynumber(i)
            if err:
                print >>self.stdout, '***', err
            else:
                print >>self.stdout, 'Deleted breakpoint', i
    do_cl = do_clear # 'c' is already an abbreviation for 'continue'

    def do_where(self, arg):
        self.print_stack_trace()
    do_w = do_where
    do_bt = do_where

    def do_up(self, arg):
        if self.curindex == 0:
            print >>self.stdout, '*** Oldest frame'
        else:
            self.curindex = self.curindex - 1
            self.curframe = self.stack[self.curindex][0]
            self.curframe_locals = self.curframe.f_locals
            self.print_stack_entry(self.stack[self.curindex])
            self.lineno = None
    do_u = do_up

    def do_down(self, arg):
        if self.curindex + 1 == len(self.stack):
            print >>self.stdout, '*** Newest frame'
        else:
            self.curindex = self.curindex + 1
            self.curframe = self.stack[self.curindex][0]
            self.curframe_locals = self.curframe.f_locals
            self.print_stack_entry(self.stack[self.curindex])
            self.lineno = None
    do_d = do_down

    def do_until(self, arg):
        self.set_until(self.curframe)
        return 1
    do_unt = do_until

    def do_step(self, arg):
        self.set_step()
        return 1
    do_s = do_step

    def do_next(self, arg):
        self.set_next(self.curframe)
        return 1
    do_n = do_next

    def do_run(self, arg):
        """Restart program by raising an exception to be caught in the main
        debugger loop.  If arguments were given, set them in sys.argv."""
        if arg:
            import shlex
            argv0 = sys.argv[0:1]
            sys.argv = shlex.split(arg)
            sys.argv[:0] = argv0
        raise Restart

    do_restart = do_run
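`do_run` rebuilds `sys.argv` with `shlex.split`, which honors shell-style quoting, while re-prepending the original script name as `argv[0]`. A standalone sketch of that argv handling (Python 3; `rebuild_argv` is an illustrative name):

```python
import shlex

def rebuild_argv(current_argv, arg_string):
    # Split the argument string like a shell would, then keep the
    # original argv[0] in front, exactly as do_run does.
    new_argv = shlex.split(arg_string)
    return current_argv[0:1] + new_argv

print(rebuild_argv(['script.py', 'old'], '--verbose "two words"'))
# ['script.py', '--verbose', 'two words']
```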

    def do_return(self, arg):
        self.set_return(self.curframe)
        return 1
    do_r = do_return

    def do_continue(self, arg):
        self.set_continue()
        return 1
    do_c = do_cont = do_continue

    def do_jump(self, arg):
        if self.curindex + 1 != len(self.stack):
            print >>self.stdout, "*** You can only jump within the bottom frame"
            return
        try:
            arg = int(arg)
        except ValueError:
            print >>self.stdout, "*** The 'jump' command requires a line number."
        else:
            try:
                # Do the jump, fix up our copy of the stack, and display the
                # new position
                self.curframe.f_lineno = arg
                self.stack[self.curindex] = self.stack[self.curindex][0], arg
                self.print_stack_entry(self.stack[self.curindex])
            except ValueError, e:
                print >>self.stdout, '*** Jump failed:', e
    do_j = do_jump

    def do_debug(self, arg):
        sys.settrace(None)
        globals = self.curframe.f_globals
        locals = self.curframe_locals
        p = Pdb(self.completekey, self.stdin, self.stdout)
        p.prompt = "(%s) " % self.prompt.strip()
        print >>self.stdout, "ENTERING RECURSIVE DEBUGGER"
        sys.call_tracing(p.run, (arg, globals, locals))
        print >>self.stdout, "LEAVING RECURSIVE DEBUGGER"
        sys.settrace(self.trace_dispatch)
        self.lastcmd = p.lastcmd

    def do_quit(self, arg):
        self._user_requested_quit = 1
        self.set_quit()
        return 1

    do_q = do_quit
    do_exit = do_quit

    def do_EOF(self, arg):
        print >>self.stdout
        self._user_requested_quit = 1
        self.set_quit()
        return 1

    def do_args(self, arg):
        co = self.curframe.f_code
        dict = self.curframe_locals
        n = co.co_argcount
        if co.co_flags & 4: n = n+1
        if co.co_flags & 8: n = n+1
        for i in range(n):
            name = co.co_varnames[i]
            print >>self.stdout, name, '=',
            if name in dict: print >>self.stdout, dict[name]
            else: print >>self.stdout, "*** undefined ***"
    do_a = do_args

    def do_retval(self, arg):
        if '__return__' in self.curframe_locals:
            print >>self.stdout, self.curframe_locals['__return__']
        else:
            print >>self.stdout, '*** Not yet returned!'
    do_rv = do_retval

    def _getval(self, arg):
        try:
            return eval(arg, self.curframe.f_globals,
                        self.curframe_locals)
        except:
            t, v = sys.exc_info()[:2]
            if isinstance(t, str):
                exc_type_name = t
            else: exc_type_name = t.__name__
            print >>self.stdout, '***', exc_type_name + ':', repr(v)
            raise

    def do_p(self, arg):
        try:
            print >>self.stdout, repr(self._getval(arg))
        except:
            pass

    def do_pp(self, arg):
        try:
            pprint.pprint(self._getval(arg), self.stdout)
        except:
            pass

    def do_list(self, arg):
        self.lastcmd = 'list'
        last = None
        if arg:
            try:
                x = eval(arg, {}, {})
                if type(x) == type(()):
                    first, last = x
                    first = int(first)
                    last = int(last)
                    if last < first:
                        # Assume it's a count
                        last = first + last
                else:
                    first = max(1, int(x) - 5)
            except:
                print >>self.stdout, '*** Error in argument:', repr(arg)
                return
        elif self.lineno is None:
            first = max(1, self.curframe.f_lineno - 5)
        else:
            first = self.lineno + 1
        if last is None:
            last = first + 10
        filename = self.curframe.f_code.co_filename
        breaklist = self.get_file_breaks(filename)
        try:
            for lineno in range(first, last+1):
                line = linecache.getline(filename, lineno,
                                         self.curframe.f_globals)
                if not line:
                    print >>self.stdout, '[EOF]'
                    break
                else:
                    s = repr(lineno).rjust(3)
                    if len(s) < 4: s = s + ' '
                    if lineno in breaklist: s = s + 'B'
                    else: s = s + ' '
                    if lineno == self.curframe.f_lineno:
                        s = s + '->'
                    print >>self.stdout, s + '\t' + line,
                    self.lineno = lineno
        except KeyboardInterrupt:
            pass
    do_l = do_list
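Each line printed by `do_list` gets a small prefix: a 3-wide line number, a `B` if a breakpoint is set on that line, and `->` if it is the current line. A standalone sketch of that formatting (Python 3; `format_listing_line` is an illustrative name):

```python
def format_listing_line(lineno, text, breaks=(), current=None):
    # Right-justify the line number to 3 columns, pad short numbers,
    # then append the breakpoint and current-line markers.
    s = repr(lineno).rjust(3)
    if len(s) < 4:
        s += ' '
    s += 'B' if lineno in breaks else ' '
    if lineno == current:
        s += '->'
    return s + '\t' + text

print(format_listing_line(7, 'x = 1', breaks={7}, current=7))
```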

    def do_whatis(self, arg):
        try:
            value = eval(arg, self.curframe.f_globals,
                            self.curframe_locals)
        except:
            t, v = sys.exc_info()[:2]
            if type(t) == type(''):
                exc_type_name = t
            else: exc_type_name = t.__name__
            print >>self.stdout, '***', exc_type_name + ':', repr(v)
            return
        code = None
        # Is it a function?
        try: code = value.func_code
        except: pass
        if code:
            print >>self.stdout, 'Function', code.co_name
            return
        # Is it an instance method?
        try: code = value.im_func.func_code
        except: pass
        if code:
            print >>self.stdout, 'Method', code.co_name
            return
        # None of the above...
        print >>self.stdout, type(value)

    def do_alias(self, arg):
        args = arg.split()
        if len(args) == 0:
            keys = self.aliases.keys()
            keys.sort()
            for alias in keys:
                print >>self.stdout, "%s = %s" % (alias, self.aliases[alias])
            return
        if args[0] in self.aliases and len(args) == 1:
            print >>self.stdout, "%s = %s" % (args[0], self.aliases[args[0]])
        else:
            self.aliases[args[0]] = ' '.join(args[1:])

    def do_unalias(self, arg):
        args = arg.split()
        if len(args) == 0: return
        if args[0] in self.aliases:
            del self.aliases[args[0]]

    # List of all the commands that make the program resume execution.
    commands_resuming = ['do_continue', 'do_step', 'do_next', 'do_return',
                         'do_quit', 'do_jump']

    # Print a traceback starting at the top stack frame.
    # The most recently entered frame is printed last;
    # this is different from dbx and gdb, but consistent with
    # the Python interpreter's stack trace.
    # It is also consistent with the up/down commands (which are
    # compatible with dbx and gdb: up moves towards 'main()'
    # and down moves towards the most recent stack frame).

    def print_stack_trace(self):
        try:
            for frame_lineno in self.stack:
                self.print_stack_entry(frame_lineno)
        except KeyboardInterrupt:
            pass

    def print_stack_entry(self, frame_lineno, prompt_prefix=line_prefix):
        frame, lineno = frame_lineno
        if frame is self.curframe:
            print >>self.stdout, '>',
        else:
            print >>self.stdout, ' ',
        print >>self.stdout, self.format_stack_entry(frame_lineno,
                                                     prompt_prefix)


    # Help methods (derived from pdb.doc)

    def help_help(self):
        self.help_h()

    def help_h(self):
        print >>self.stdout, """h(elp)
Without argument, print the list of available commands.
With a command name as argument, print help about that command
"help pdb" pipes the full documentation file to the $PAGER
"help exec" gives help on the ! command"""

    def help_where(self):
        self.help_w()

    def help_w(self):
        print >>self.stdout, """w(here)
Print a stack trace, with the most recent frame at the bottom.
An arrow indicates the "current frame", which determines the
context of most commands.  'bt' is an alias for this command."""

    help_bt = help_w

    def help_down(self):
        self.help_d()

    def help_d(self):
        print >>self.stdout, """d(own)
Move the current frame one level down in the stack trace
(to a newer frame)."""

    def help_up(self):
        self.help_u()

    def help_u(self):
        print >>self.stdout, """u(p)
Move the current frame one level up in the stack trace
(to an older frame)."""

    def help_break(self):
        self.help_b()

    def help_b(self):
        print >>self.stdout, """b(reak) ([file:]lineno | function) [, condition]
With a line number argument, set a break there in the current
file.  With a function name, set a break at first executable line
of that function.  Without argument, list all breaks.  If a second
argument is present, it is a string specifying an expression
which must evaluate to true before the breakpoint is honored.

The line number may be prefixed with a filename and a colon,
to specify a breakpoint in another file (probably one that
hasn't been loaded yet).  The file is searched for on sys.path;
the .py suffix may be omitted."""

    def help_clear(self):
        self.help_cl()

    def help_cl(self):
        print >>self.stdout, "cl(ear) filename:lineno"
        print >>self.stdout, """cl(ear) [bpnumber [bpnumber...]]
With a space separated list of breakpoint numbers, clear
those breakpoints.  Without argument, clear all breaks (but
first ask confirmation).  With a filename:lineno argument,
clear all breaks at that line in that file.

Note that the argument is different from previous versions of
the debugger (in python distributions 1.5.1 and before) where
a linenumber was used instead of either filename:lineno or
breakpoint numbers."""

    def help_tbreak(self):
        print >>self.stdout, """tbreak  same arguments as break, but breakpoint
is removed when first hit."""

    def help_enable(self):
        print >>self.stdout, """enable bpnumber [bpnumber ...]
Enables the breakpoints given as a space separated list of
bp numbers."""

    def help_disable(self):
        print >>self.stdout, """disable bpnumber [bpnumber ...]
Disables the breakpoints given as a space separated list of
bp numbers."""

    def help_ignore(self):
        print >>self.stdout, """ignore bpnumber count
Sets the ignore count for the given breakpoint number.  A breakpoint
becomes active when the ignore count is zero.  When non-zero, the
count is decremented each time the breakpoint is reached and the
breakpoint is not disabled and any associated condition evaluates
to true."""

    def help_condition(self):
        print >>self.stdout, """condition bpnumber str_condition
str_condition is a string specifying an expression which
must evaluate to true before the breakpoint is honored.
If str_condition is absent, any existing condition is removed;
i.e., the breakpoint is made unconditional."""

    def help_step(self):
        self.help_s()

    def help_s(self):
        print >>self.stdout, """s(tep)
Execute the current line, stop at the first possible occasion
(either in a function that is called or in the current function)."""

    def help_until(self):
        self.help_unt()

    def help_unt(self):
        print """unt(il)
Continue execution until the line with a number greater than the current
one is reached or until the current frame returns"""

    def help_next(self):
        self.help_n()

    def help_n(self):
        print >>self.stdout, """n(ext)
Continue execution until the next line in the current function
is reached or it returns."""

    def help_return(self):
        self.help_r()

    def help_r(self):
        print >>self.stdout, """r(eturn)
Continue execution until the current function returns."""

    def help_continue(self):
        self.help_c()

    def help_cont(self):
        self.help_c()

    def help_c(self):
        print >>self.stdout, """c(ont(inue))
Continue execution, only stop when a breakpoint is encountered."""

    def help_jump(self):
        self.help_j()

    def help_j(self):
        print >>self.stdout, """j(ump) lineno
Set the next line that will be executed."""

    def help_debug(self):
        print >>self.stdout, """debug code
Enter a recursive debugger that steps through the code argument
(which is an arbitrary expression or statement to be executed
in the current environment)."""

    def help_list(self):
        self.help_l()

    def help_l(self):
        print >>self.stdout, """l(ist) [first [,last]]
List source code for the current file.
Without arguments, list 11 lines around the current line
or continue the previous listing.
With one argument, list 11 lines starting at that line.
With two arguments, list the given range;
if the second argument is less than the first, it is a count."""

    def help_args(self):
        self.help_a()

    def help_a(self):
        print >>self.stdout, """a(rgs)
Print the arguments of the current function."""

    def help_p(self):
        print >>self.stdout, """p expression
Print the value of the expression."""

    def help_pp(self):
        print >>self.stdout, """pp expression
Pretty-print the value of the expression."""

    def help_exec(self):
        print >>self.stdout, """(!) statement
Execute the (one-line) statement in the context of
the current stack frame.
The exclamation point can be omitted unless the first word
of the statement resembles a debugger command.
To assign to a global variable you must always prefix the
command with a 'global' command, e.g.:
(Pdb) global list_options; list_options = ['-l']
(Pdb)"""

    def help_run(self):
        print """run [args...]
Restart the debugged python program. If a string is supplied, it is
split with "shlex" and the result is used as the new sys.argv.
History, breakpoints, actions and debugger options are preserved.
"restart" is an alias for "run"."""

    help_restart = help_run

    def help_quit(self):
        self.help_q()

    def help_q(self):
        print >>self.stdout, """q(uit) or exit - Quit from the debugger.
The program being executed is aborted."""

    help_exit = help_q

    def help_whatis(self):
        print >>self.stdout, """whatis arg
Prints the type of the argument."""

    def help_EOF(self):
        print >>self.stdout, """EOF
Handles the receipt of EOF as a command."""

    def help_alias(self):
        print >>self.stdout, """alias [name [command [parameter parameter ...]]]
Creates an alias called 'name' that executes 'command'.  The command
must *not* be enclosed in quotes.  Replaceable parameters are
indicated by %1, %2, and so on, while %* is replaced by all the
parameters.  If no command is given, the current alias for name
is shown. If no name is given, all aliases are listed.

Aliases may be nested and can contain anything that can be
legally typed at the pdb prompt.  Note!  You *can* override
internal pdb commands with aliases!  Those internal commands
are then hidden until the alias is removed.  Aliasing is recursively
applied to the first word of the command line; all other words
in the line are left alone.

Some useful aliases (especially when placed in the .pdbrc file) are:

#Print instance variables (usage "pi classInst")
alias pi for k in %1.__dict__.keys(): print "%1.",k,"=",%1.__dict__[k]

#Print instance variables in self
alias ps pi self
"""

    def help_unalias(self):
        print >>self.stdout, """unalias name
Deletes the specified alias."""

    def help_commands(self):
        print >>self.stdout, """commands [bpnumber]
(com) ...
(com) end
(Pdb)

Specify a list of commands for breakpoint number bpnumber.  The
commands themselves appear on the following lines.  Type a line
containing just 'end' to terminate the commands.

To remove all commands from a breakpoint, type commands and
follow it immediately with end; that is, give no commands.

With no bpnumber argument, commands refers to the last
breakpoint set.

You can use breakpoint commands to start your program up again.
Simply use the continue command, or step, or any other
command that resumes execution.

Specifying any command resuming execution (currently continue,
step, next, return, jump, quit and their abbreviations) terminates
the command list (as if that command was immediately followed by end).
This is because any time you resume execution
(even with a simple next or step), you may encounter
another breakpoint--which could have its own command list, leading to
ambiguities about which list to execute.

   If you use the 'silent' command in the command list, the
usual message about stopping at a breakpoint is not printed.  This may
be desirable for breakpoints that are to print a specific message and
then continue.  If none of the other commands print anything, you
see no sign that the breakpoint was reached.
"""

    def help_pdb(self):
        help()

    def lookupmodule(self, filename):
        """Helper function for break/clear parsing -- may be overridden.

        lookupmodule() translates (possibly incomplete) file or module name
        into an absolute file name.
        """
        if os.path.isabs(filename) and os.path.exists(filename):
            return filename
        f = os.path.join(sys.path[0], filename)
        if os.path.exists(f) and self.canonic(f) == self.mainpyfile:
            return f
        root, ext = os.path.splitext(filename)
        if ext == '':
            filename = filename + '.py'
        if os.path.isabs(filename):
            return filename
        for dirname in sys.path:
            while os.path.islink(dirname):
                dirname = os.readlink(dirname)
            fullname = os.path.join(dirname, filename)
            if os.path.exists(fullname):
                return fullname
        return None
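The core of `lookupmodule` is: accept an existing absolute path as-is, append `.py` when the name has no extension, then walk `sys.path` for an existing file. A simplified standalone sketch (Python 3; `find_on_path` is an illustrative name, and the `mainpyfile`/symlink handling is omitted):

```python
import os
import sys
import tempfile

def find_on_path(filename):
    # Absolute existing paths win immediately.
    if os.path.isabs(filename) and os.path.exists(filename):
        return filename
    # Bare module names get a .py suffix before the search.
    root, ext = os.path.splitext(filename)
    if ext == '':
        filename += '.py'
    for dirname in sys.path:
        fullname = os.path.join(dirname, filename)
        if os.path.exists(fullname):
            return fullname
    return None

# Demo: drop a module file into a temp dir placed at the front of sys.path.
tmpdir = tempfile.mkdtemp()
open(os.path.join(tmpdir, 'mymod.py'), 'w').close()
sys.path.insert(0, tmpdir)
found = find_on_path('mymod')
print(found)  # .../mymod.py
```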

    def _runscript(self, filename):
        # The script has to run in __main__ namespace (or imports from
        # __main__ will break).
        #
        # So we clear up the __main__ and set several special variables
        # (this gets rid of pdb's globals and cleans old variables on restarts).
        import __main__
        __main__.__dict__.clear()
        __main__.__dict__.update({"__name__"    : "__main__",
                                  "__file__"    : filename,
                                  "__builtins__": __builtins__,
                                 })

        # When bdb sets tracing, a number of call and line events happens
        # BEFORE debugger even reaches user's code (and the exact sequence of
        # events depends on python version). So we take special measures to
        # avoid stopping before we reach the main script (see user_line and
        # user_call for details).
        self._wait_for_mainpyfile = 1
        self.mainpyfile = self.canonic(filename)
        self._user_requested_quit = 0
        statement = 'execfile(%r)' % filename
        self.run(statement)

# Simplified interface

def run(statement, globals=None, locals=None):
    Pdb().run(statement, globals, locals)

def runeval(expression, globals=None, locals=None):
    return Pdb().runeval(expression, globals, locals)

def runctx(statement, globals, locals):
    # B/W compatibility
    run(statement, globals, locals)

def runcall(*args, **kwds):
    return Pdb().runcall(*args, **kwds)

def set_trace():
    Pdb().set_trace(sys._getframe().f_back)

# Post-Mortem interface

def post_mortem(t=None):
    # handling the default
    if t is None:
        # sys.exc_info() returns (type, value, traceback) if an exception is
        # being handled, otherwise it returns None
        t = sys.exc_info()[2]
        if t is None:
            raise ValueError("A valid traceback must be passed if no "
                                               "exception is being handled")

    p = Pdb()
    p.reset()
    p.interaction(None, t)

def pm():
    post_mortem(sys.last_traceback)
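When called with no argument, `post_mortem` relies on `sys.exc_info()[2]` returning the traceback of the exception currently being handled. A standalone sketch of that lookup (Python 3; `capture_traceback` is an illustrative name):

```python
import sys

def capture_traceback():
    # Inside an except block, sys.exc_info() yields (type, value,
    # traceback); outside one, all three slots are None.
    tb = sys.exc_info()[2]
    if tb is None:
        raise ValueError("A valid traceback must be passed if no "
                         "exception is being handled")
    return tb

try:
    1 / 0
except ZeroDivisionError:
    tb = capture_traceback()
    print(tb.tb_lineno)  # line number of the failing statement
```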


# Main program for testing

TESTCMD = 'import x; x.main()'

def test():
    run(TESTCMD)

# print help
def help():
    for dirname in sys.path:
        fullname = os.path.join(dirname, 'pdb.doc')
        if os.path.exists(fullname):
            sts = os.system('${PAGER-more} '+fullname)
            if sts: print '*** Pager exit status:', sts
            break
    else:
        print 'Sorry, can\'t find the help file "pdb.doc"',
        print 'along the Python search path'

def main():
    if not sys.argv[1:] or sys.argv[1] in ("--help", "-h"):
        print "usage: pdb.py scriptfile [arg] ..."
        sys.exit(2)

    mainpyfile =  sys.argv[1]     # Get script filename
    if not os.path.exists(mainpyfile):
        print 'Error:', mainpyfile, 'does not exist'
        sys.exit(1)

    del sys.argv[0]         # Hide "pdb.py" from argument list

    # Replace pdb's dir with script's dir in front of module search path.
    sys.path[0] = os.path.dirname(mainpyfile)

    # Note on saving/restoring sys.argv: it's a good idea when sys.argv was
    # modified by the script being debugged. It's a bad idea when it was
    # changed by the user from the command line. There is a "restart" command
    # which allows explicit specification of command line arguments.
    pdb = Pdb()
    while True:
        try:
            pdb._runscript(mainpyfile)
            if pdb._user_requested_quit:
                break
            print "The program finished and will be restarted"
        except Restart:
            print "Restarting", mainpyfile, "with arguments:"
            print "\t" + " ".join(sys.argv[1:])
        except SystemExit:
            # In most cases SystemExit does not warrant a post-mortem session.
            print "The program exited via sys.exit(). Exit status: ",
            print sys.exc_info()[1]
        except SyntaxError:
            traceback.print_exc()
            sys.exit(1)
        except:
            traceback.print_exc()
            print "Uncaught exception. Entering post mortem debugging"
            print "Running 'cont' or 'step' will restart the program"
            t = sys.exc_info()[2]
            pdb.interaction(None, t)
            print "Post mortem debugger finished. The " + mainpyfile + \
                  " will be restarted"


# When invoked as main program, invoke the debugger on a script
if __name__ == '__main__':
    import pdb
    pdb.main()
"""A more or less complete user-defined wrapper around list objects."""

import collections

class UserList(collections.MutableSequence):
    def __init__(self, initlist=None):
        self.data = []
        if initlist is not None:
            # XXX should this accept an arbitrary sequence?
            if type(initlist) == type(self.data):
                self.data[:] = initlist
            elif isinstance(initlist, UserList):
                self.data[:] = initlist.data[:]
            else:
                self.data = list(initlist)
    def __repr__(self): return repr(self.data)
    def __lt__(self, other): return self.data <  self.__cast(other)
    def __le__(self, other): return self.data <= self.__cast(other)
    def __eq__(self, other): return self.data == self.__cast(other)
    def __ne__(self, other): return self.data != self.__cast(other)
    def __gt__(self, other): return self.data >  self.__cast(other)
    def __ge__(self, other): return self.data >= self.__cast(other)
    def __cast(self, other):
        if isinstance(other, UserList): return other.data
        else: return other
    def __cmp__(self, other):
        return cmp(self.data, self.__cast(other))
    __hash__ = None # Mutable sequence, so not hashable
    def __contains__(self, item): return item in self.data
    def __len__(self): return len(self.data)
    def __getitem__(self, i): return self.data[i]
    def __setitem__(self, i, item): self.data[i] = item
    def __delitem__(self, i): del self.data[i]
    def __getslice__(self, i, j):
        i = max(i, 0); j = max(j, 0)
        return self.__class__(self.data[i:j])
    def __setslice__(self, i, j, other):
        i = max(i, 0); j = max(j, 0)
        if isinstance(other, UserList):
            self.data[i:j] = other.data
        elif isinstance(other, type(self.data)):
            self.data[i:j] = other
        else:
            self.data[i:j] = list(other)
    def __delslice__(self, i, j):
        i = max(i, 0); j = max(j, 0)
        del self.data[i:j]
    def __add__(self, other):
        if isinstance(other, UserList):
            return self.__class__(self.data + other.data)
        elif isinstance(other, type(self.data)):
            return self.__class__(self.data + other)
        else:
            return self.__class__(self.data + list(other))
    def __radd__(self, other):
        if isinstance(other, UserList):
            return self.__class__(other.data + self.data)
        elif isinstance(other, type(self.data)):
            return self.__class__(other + self.data)
        else:
            return self.__class__(list(other) + self.data)
    def __iadd__(self, other):
        if isinstance(other, UserList):
            self.data += other.data
        elif isinstance(other, type(self.data)):
            self.data += other
        else:
            self.data += list(other)
        return self
    def __mul__(self, n):
        return self.__class__(self.data*n)
    __rmul__ = __mul__
    def __imul__(self, n):
        self.data *= n
        return self
    def append(self, item): self.data.append(item)
    def insert(self, i, item): self.data.insert(i, item)
    def pop(self, i=-1): return self.data.pop(i)
    def remove(self, item): self.data.remove(item)
    def count(self, item): return self.data.count(item)
    def index(self, item, *args): return self.data.index(item, *args)
    def reverse(self): self.data.reverse()
    def sort(self, *args, **kwds): self.data.sort(*args, **kwds)
    def extend(self, other):
        if isinstance(other, UserList):
            self.data.extend(other.data)
        else:
            self.data.extend(other)
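
The UserList above is the Python 2 implementation; the same wrapper class ships in Python 3 as `collections.UserList`, with identical delegation semantics. A minimal sketch (the `TypedList` name is illustrative, not from the source) showing why the `self.__class__(...)` pattern in `__add__` and slicing matters — operations on a subclass return the subclass, not a plain list:

```python
from collections import UserList  # Python 3 home of this class

class TypedList(UserList):
    """Illustrative subclass: operations return TypedList, not plain list."""
    pass

tl = TypedList([1, 2, 3])
tl.append(4)                     # delegates to self.data.append
combined = tl + [5]              # __add__ wraps the result in self.__class__
assert isinstance(combined, TypedList)
assert combined.data == [1, 2, 3, 4, 5]
assert tl[1:3].data == [2, 3]    # slicing also returns the wrapper class
```

This is the reason the implementation spells out `__add__`, `__radd__`, and the slice methods instead of inheriting list directly: every path that produces a new sequence routes through `self.__class__`.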
# [binary .pyc content omitted: compiled bytecode of
#  /usr/lib64/python2.7/json/decoder.py ("Implementation of JSONDecoder"),
#  not recoverable as source from this dump]

"""Command-line tool to validate and pretty-print JSON

Usage::

    $ echo '{"json":"obj"}' | python -m json.tool
    {
        "json": "obj"
    }
    $ echo '{ 1.2:3.4}' | python -m json.tool
    Expecting property name enclosed in double quotes: line 1 column 3 (char 2)

"""
import sys
import json

def main():
    if len(sys.argv) == 1:
        infile = sys.stdin
        outfile = sys.stdout
    elif len(sys.argv) == 2:
        infile = open(sys.argv[1], 'rb')
        outfile = sys.stdout
    elif len(sys.argv) == 3:
        infile = open(sys.argv[1], 'rb')
        outfile = open(sys.argv[2], 'wb')
    else:
        raise SystemExit(sys.argv[0] + " [infile [outfile]]")
    with infile:
        try:
            obj = json.load(infile)
        except ValueError, e:
            raise SystemExit(e)
    with outfile:
        json.dump(obj, outfile, sort_keys=True,
                  indent=4, separators=(',', ': '))
        outfile.write('\n')


if __name__ == '__main__':
    main()
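
The `json.dump` call in `main()` above pins down the pretty-printing options. The same formatting is reproducible directly with `json.dumps` (behavior is identical in Python 3):

```python
import json

# The same options json.tool passes to json.dump: sorted keys,
# four-space indentation, and separators that avoid trailing whitespace.
obj = {"b": 2, "a": 1}
pretty = json.dumps(obj, sort_keys=True, indent=4, separators=(',', ': '))
print(pretty)
assert pretty == '{\n    "a": 1,\n    "b": 2\n}'
```

The explicit `separators=(',', ': ')` matters because the default item separator is `', '`, which would leave a trailing space at the end of each indented line.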
# [binary .pyc content omitted: compiled bytecode of the Python 2.7 json
#  package modules encoder.py, tool.py, and scanner.py; the encoder.py
#  source follows below, and the tool.py source appears above]

"""Implementation of JSONEncoder
"""
import re

try:
    from _json import encode_basestring_ascii as c_encode_basestring_ascii
except ImportError:
    c_encode_basestring_ascii = None
try:
    from _json import make_encoder as c_make_encoder
except ImportError:
    c_make_encoder = None

ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
HAS_UTF8 = re.compile(r'[\x80-\xff]')
ESCAPE_DCT = {
    '\\': '\\\\',
    '"': '\\"',
    '\b': '\\b',
    '\f': '\\f',
    '\n': '\\n',
    '\r': '\\r',
    '\t': '\\t',
}
for i in range(0x20):
    ESCAPE_DCT.setdefault(chr(i), '\\u{0:04x}'.format(i))
    #ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))

INFINITY = float('inf')
FLOAT_REPR = float.__repr__

def encode_basestring(s):
    """Return a JSON representation of a Python string

    """
    def replace(match):
        return ESCAPE_DCT[match.group(0)]
    return '"' + ESCAPE.sub(replace, s) + '"'


def py_encode_basestring_ascii(s):
    """Return an ASCII-only JSON representation of a Python string

    """
    if isinstance(s, str) and HAS_UTF8.search(s) is not None:
        s = s.decode('utf-8')
    def replace(match):
        s = match.group(0)
        try:
            return ESCAPE_DCT[s]
        except KeyError:
            n = ord(s)
            if n < 0x10000:
                return '\\u{0:04x}'.format(n)
                #return '\\u%04x' % (n,)
            else:
                # surrogate pair
                n -= 0x10000
                s1 = 0xd800 | ((n >> 10) & 0x3ff)
                s2 = 0xdc00 | (n & 0x3ff)
                return '\\u{0:04x}\\u{1:04x}'.format(s1, s2)
                #return '\\u%04x\\u%04x' % (s1, s2)
    return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'


encode_basestring_ascii = (
    c_encode_basestring_ascii or py_encode_basestring_ascii)
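
The escaping rules implemented by `py_encode_basestring_ascii` above — `\uXXXX` for non-ASCII, a UTF-16 surrogate pair for characters beyond U+FFFF — are observable through `json.dumps`, which routes strings through this encoder when `ensure_ascii` is true (the default; same behavior in Python 3):

```python
import json

# Non-ASCII characters become \uXXXX escapes under ensure_ascii=True
assert json.dumps("café") == '"caf\\u00e9"'

# Characters above U+FFFF (here U+1D11E, musical G clef) are emitted
# as a surrogate pair, per the branch computing s1 and s2 above
assert json.dumps("\U0001d11e") == '"\\ud834\\udd1e"'

# ensure_ascii=False skips the escaping entirely
assert json.dumps("café", ensure_ascii=False) == '"café"'
```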

class JSONEncoder(object):
    """Extensible JSON <http://json.org> encoder for Python data structures.

    Supports the following objects and types by default:

    +-------------------+---------------+
    | Python            | JSON          |
    +===================+===============+
    | dict              | object        |
    +-------------------+---------------+
    | list, tuple       | array         |
    +-------------------+---------------+
    | str, unicode      | string        |
    +-------------------+---------------+
    | int, long, float  | number        |
    +-------------------+---------------+
    | True              | true          |
    +-------------------+---------------+
    | False             | false         |
    +-------------------+---------------+
    | None              | null          |
    +-------------------+---------------+

    To extend this to recognize other objects, subclass and implement a
    ``.default()`` method with another method that returns a serializable
    object for ``o`` if possible, otherwise it should call the superclass
    implementation (to raise ``TypeError``).

    """
    item_separator = ', '
    key_separator = ': '
    def __init__(self, skipkeys=False, ensure_ascii=True,
            check_circular=True, allow_nan=True, sort_keys=False,
            indent=None, separators=None, encoding='utf-8', default=None):
        """Constructor for JSONEncoder, with sensible defaults.

        If skipkeys is false, then it is a TypeError to attempt
        encoding of keys that are not str, int, long, float or None.  If
        skipkeys is True, such items are simply skipped.

        If *ensure_ascii* is true (the default), all non-ASCII
        characters in the output are escaped with \uXXXX sequences,
        and the results are str instances consisting of ASCII
        characters only.  If ensure_ascii is False, a result may be a
        unicode instance.  This usually happens if the input contains
        unicode strings or the *encoding* parameter is used.

        If check_circular is true, then lists, dicts, and custom encoded
        objects will be checked for circular references during encoding to
        prevent an infinite recursion (which would cause an OverflowError).
        Otherwise, no such check takes place.

        If allow_nan is true, then NaN, Infinity, and -Infinity will be
        encoded as such.  This behavior is not JSON specification compliant,
        but is consistent with most JavaScript based encoders and decoders.
        Otherwise, it will be a ValueError to encode such floats.

        If sort_keys is true, then the output of dictionaries will be
        sorted by key; this is useful for regression tests to ensure
        that JSON serializations can be compared on a day-to-day basis.

        If indent is a non-negative integer, then JSON array
        elements and object members will be pretty-printed with that
        indent level.  An indent level of 0 will only insert newlines.
        None is the most compact representation.  Since the default
        item separator is ', ',  the output might include trailing
        whitespace when indent is specified.  You can use
        separators=(',', ': ') to avoid this.

        If specified, separators should be a (item_separator, key_separator)
        tuple.  The default is (', ', ': ').  To get the most compact JSON
        representation you should specify (',', ':') to eliminate whitespace.

        If specified, default is a function that gets called for objects
        that can't otherwise be serialized.  It should return a JSON encodable
        version of the object or raise a ``TypeError``.

        If encoding is not None, then all input strings will be
        transformed into unicode using that encoding prior to JSON-encoding.
        The default is UTF-8.

        """

        self.skipkeys = skipkeys
        self.ensure_ascii = ensure_ascii
        self.check_circular = check_circular
        self.allow_nan = allow_nan
        self.sort_keys = sort_keys
        self.indent = indent
        if separators is not None:
            self.item_separator, self.key_separator = separators
        if default is not None:
            self.default = default
        self.encoding = encoding

    def default(self, o):
        """Implement this method in a subclass such that it returns
        a serializable object for ``o``, or calls the base implementation
        (to raise a ``TypeError``).

        For example, to support arbitrary iterators, you could
        implement default like this::

            def default(self, o):
                try:
                    iterable = iter(o)
                except TypeError:
                    pass
                else:
                    return list(iterable)
                # Let the base class default method raise the TypeError
                return JSONEncoder.default(self, o)

        """
        raise TypeError(repr(o) + " is not JSON serializable")
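
The docstring above shows the iterator pattern; here is the same idea applied to sets, which JSON has no native type for. The `SetEncoder` name is illustrative, not part of the library (behavior verified under Python 3's `json`):

```python
import json

class SetEncoder(json.JSONEncoder):
    """Illustrative subclass: serialize sets as sorted JSON arrays."""
    def default(self, o):
        if isinstance(o, set):
            return sorted(o)
        # Fall back to the base class, which raises TypeError
        return json.JSONEncoder.default(self, o)

assert json.dumps({"ids": {3, 1, 2}}, cls=SetEncoder) == '{"ids": [1, 2, 3]}'
```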

    def encode(self, o):
        """Return a JSON string representation of a Python data structure.

        >>> JSONEncoder().encode({"foo": ["bar", "baz"]})
        '{"foo": ["bar", "baz"]}'

        """
        # This is for extremely simple cases and benchmarks.
        if isinstance(o, basestring):
            if isinstance(o, str):
                _encoding = self.encoding
                if (_encoding is not None
                        and not (_encoding == 'utf-8')):
                    o = o.decode(_encoding)
            if self.ensure_ascii:
                return encode_basestring_ascii(o)
            else:
                return encode_basestring(o)
        # This doesn't pass the iterator directly to ''.join() because the
        # exceptions aren't as detailed.  The list call should be roughly
        # equivalent to the PySequence_Fast that ''.join() would do.
        chunks = self.iterencode(o, _one_shot=True)
        if not isinstance(chunks, (list, tuple)):
            chunks = list(chunks)
        return ''.join(chunks)

    def iterencode(self, o, _one_shot=False):
        """Encode the given object and yield each string
        representation as available.

        For example::

            for chunk in JSONEncoder().iterencode(bigobject):
                mysocket.write(chunk)

        """
        if self.check_circular:
            markers = {}
        else:
            markers = None
        if self.ensure_ascii:
            _encoder = encode_basestring_ascii
        else:
            _encoder = encode_basestring
        if self.encoding != 'utf-8':
            def _encoder(o, _orig_encoder=_encoder, _encoding=self.encoding):
                if isinstance(o, str):
                    o = o.decode(_encoding)
                return _orig_encoder(o)

        def floatstr(o, allow_nan=self.allow_nan,
                _repr=FLOAT_REPR, _inf=INFINITY, _neginf=-INFINITY):
            # Check for specials.  Note that this type of test is processor
            # and/or platform-specific, so do tests which don't depend on the
            # internals.

            if o != o:
                text = 'NaN'
            elif o == _inf:
                text = 'Infinity'
            elif o == _neginf:
                text = '-Infinity'
            else:
                return _repr(o)

            if not allow_nan:
                raise ValueError(
                    "Out of range float values are not JSON compliant: " +
                    repr(o))

            return text


        if (_one_shot and c_make_encoder is not None
                and self.indent is None and not self.sort_keys):
            _iterencode = c_make_encoder(
                markers, self.default, _encoder, self.indent,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, self.allow_nan)
        else:
            _iterencode = _make_iterencode(
                markers, self.default, _encoder, self.indent, floatstr,
                self.key_separator, self.item_separator, self.sort_keys,
                self.skipkeys, _one_shot)
        return _iterencode(o, 0)
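
The `floatstr` closure defined in `iterencode` above is what makes the `allow_nan` flag observable from the public API: special float values either serialize as JavaScript-style literals or raise. A quick check (same behavior in Python 3):

```python
import json

# NaN and the infinities serialize as non-spec literals by default
assert json.dumps(float('inf')) == 'Infinity'
assert json.dumps(float('nan')) == 'NaN'

# allow_nan=False makes floatstr raise instead
try:
    json.dumps(float('nan'), allow_nan=False)
    message = None
except ValueError as e:
    message = str(e)
assert message is not None and 'not JSON compliant' in message
```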

def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
        _key_separator, _item_separator, _sort_keys, _skipkeys, _one_shot,
        ## HACK: hand-optimized bytecode; turn globals into locals
        ValueError=ValueError,
        basestring=basestring,
        dict=dict,
        float=float,
        id=id,
        int=int,
        isinstance=isinstance,
        list=list,
        long=long,
        str=str,
        tuple=tuple,
    ):

    def _iterencode_list(lst, _current_indent_level):
        if not lst:
            yield '[]'
            return
        if markers is not None:
            markerid = id(lst)
            if markerid in markers:
                raise ValueError("Circular reference detected")
            markers[markerid] = lst
        buf = '['
        if _indent is not None:
            _current_indent_level += 1
            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
            separator = _item_separator + newline_indent
            buf += newline_indent
        else:
            newline_indent = None
            separator = _item_separator
        first = True
        for value in lst:
            if first:
                first = False
            else:
                buf = separator
            if isinstance(value, basestring):
                yield buf + _encoder(value)
            elif value is None:
                yield buf + 'null'
            elif value is True:
                yield buf + 'true'
            elif value is False:
                yield buf + 'false'
            elif isinstance(value, (int, long)):
                yield buf + str(value)
            elif isinstance(value, float):
                yield buf + _floatstr(value)
            else:
                yield buf
                if isinstance(value, (list, tuple)):
                    chunks = _iterencode_list(value, _current_indent_level)
                elif isinstance(value, dict):
                    chunks = _iterencode_dict(value, _current_indent_level)
                else:
                    chunks = _iterencode(value, _current_indent_level)
                for chunk in chunks:
                    yield chunk
        if newline_indent is not None:
            _current_indent_level -= 1
            yield '\n' + (' ' * (_indent * _current_indent_level))
        yield ']'
        if markers is not None:
            del markers[markerid]

    def _iterencode_dict(dct, _current_indent_level):
        if not dct:
            yield '{}'
            return
        if markers is not None:
            markerid = id(dct)
            if markerid in markers:
                raise ValueError("Circular reference detected")
            markers[markerid] = dct
        yield '{'
        if _indent is not None:
            _current_indent_level += 1
            newline_indent = '\n' + (' ' * (_indent * _current_indent_level))
            item_separator = _item_separator + newline_indent
            yield newline_indent
        else:
            newline_indent = None
            item_separator = _item_separator
        first = True
        if _sort_keys:
            items = sorted(dct.items(), key=lambda kv: kv[0])
        else:
            items = dct.iteritems()
        for key, value in items:
            if isinstance(key, basestring):
                pass
            # JavaScript is weakly typed for these, so it makes sense to
            # also allow them.  Many encoders seem to do something like this.
            elif isinstance(key, float):
                key = _floatstr(key)
            elif key is True:
                key = 'true'
            elif key is False:
                key = 'false'
            elif key is None:
                key = 'null'
            elif isinstance(key, (int, long)):
                key = str(key)
            elif _skipkeys:
                continue
            else:
                raise TypeError("key " + repr(key) + " is not a string")
            if first:
                first = False
            else:
                yield item_separator
            yield _encoder(key)
            yield _key_separator
            if isinstance(value, basestring):
                yield _encoder(value)
            elif value is None:
                yield 'null'
            elif value is True:
                yield 'true'
            elif value is False:
                yield 'false'
            elif isinstance(value, (int, long)):
                yield str(value)
            elif isinstance(value, float):
                yield _floatstr(value)
            else:
                if isinstance(value, (list, tuple)):
                    chunks = _iterencode_list(value, _current_indent_level)
                elif isinstance(value, dict):
                    chunks = _iterencode_dict(value, _current_indent_level)
                else:
                    chunks = _iterencode(value, _current_indent_level)
                for chunk in chunks:
                    yield chunk
        if newline_indent is not None:
            _current_indent_level -= 1
            yield '\n' + (' ' * (_indent * _current_indent_level))
        yield '}'
        if markers is not None:
            del markers[markerid]

    def _iterencode(o, _current_indent_level):
        if isinstance(o, basestring):
            yield _encoder(o)
        elif o is None:
            yield 'null'
        elif o is True:
            yield 'true'
        elif o is False:
            yield 'false'
        elif isinstance(o, (int, long)):
            yield str(o)
        elif isinstance(o, float):
            yield _floatstr(o)
        elif isinstance(o, (list, tuple)):
            for chunk in _iterencode_list(o, _current_indent_level):
                yield chunk
        elif isinstance(o, dict):
            for chunk in _iterencode_dict(o, _current_indent_level):
                yield chunk
        else:
            if markers is not None:
                markerid = id(o)
                if markerid in markers:
                    raise ValueError("Circular reference detected")
                markers[markerid] = o
            o = _default(o)
            for chunk in _iterencode(o, _current_indent_level):
                yield chunk
            if markers is not None:
                del markers[markerid]

    return _iterencode
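The generator returned by `_make_iterencode` is what drives `JSONEncoder.iterencode()` on the pure-Python path. A small sketch using only the public `json` API (it behaves the same way on Python 2 and 3) shows the document being produced as a stream of chunks rather than as one string:

```python
import json

# iterencode() yields the document as a series of small chunks -- exactly
# what the _iterencode generator above produces -- so callers can write a
# large document to a stream without building it in memory first.
enc = json.JSONEncoder(sort_keys=True)
chunks = list(enc.iterencode({'b': [1, 2.5], 'a': None}))
assert ''.join(chunks) == '{"a": null, "b": [1, 2.5]}'
assert len(chunks) > 1  # more than one chunk: the output is streamed
```

This is also why `json.dump()` can loop `for chunk in iterable: fp.write(chunk)` instead of serializing everything up front.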
# /usr/lib64/python2.7/json/scanner.py
"""JSON token scanner
"""
import re
try:
    from _json import make_scanner as c_make_scanner
except ImportError:
    c_make_scanner = None

__all__ = ['make_scanner']

NUMBER_RE = re.compile(
    r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
    (re.VERBOSE | re.MULTILINE | re.DOTALL))

def py_make_scanner(context):
    parse_object = context.parse_object
    parse_array = context.parse_array
    parse_string = context.parse_string
    match_number = NUMBER_RE.match
    encoding = context.encoding
    strict = context.strict
    parse_float = context.parse_float
    parse_int = context.parse_int
    parse_constant = context.parse_constant
    object_hook = context.object_hook
    object_pairs_hook = context.object_pairs_hook

    def _scan_once(string, idx):
        try:
            nextchar = string[idx]
        except IndexError:
            raise StopIteration

        if nextchar == '"':
            return parse_string(string, idx + 1, encoding, strict)
        elif nextchar == '{':
            return parse_object((string, idx + 1), encoding, strict,
                _scan_once, object_hook, object_pairs_hook)
        elif nextchar == '[':
            return parse_array((string, idx + 1), _scan_once)
        elif nextchar == 'n' and string[idx:idx + 4] == 'null':
            return None, idx + 4
        elif nextchar == 't' and string[idx:idx + 4] == 'true':
            return True, idx + 4
        elif nextchar == 'f' and string[idx:idx + 5] == 'false':
            return False, idx + 5

        m = match_number(string, idx)
        if m is not None:
            integer, frac, exp = m.groups()
            if frac or exp:
                res = parse_float(integer + (frac or '') + (exp or ''))
            else:
                res = parse_int(integer)
            return res, m.end()
        elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
            return parse_constant('NaN'), idx + 3
        elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
            return parse_constant('Infinity'), idx + 8
        elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
            return parse_constant('-Infinity'), idx + 9
        else:
            raise StopIteration

    return _scan_once

make_scanner = c_make_scanner or py_make_scanner
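The scanner's `NUMBER_RE` pattern splits a JSON number into integer, fraction, and exponent capture groups, and `_scan_once` resumes scanning at `m.end()`. A self-contained check (the pattern below is a copy of the one in the scanner module):

```python
import re

# Copy of the scanner's NUMBER_RE: integer part, then an optional fraction
# and an optional exponent, each in its own capture group.
NUMBER_RE = re.compile(r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?')

m = NUMBER_RE.match('-12.5e3,"next"')
assert m.groups() == ('-12', '.5', 'e3')
assert m.end() == 7  # scanning resumes at the comma
assert NUMBER_RE.match('42').groups() == ('42', None, None)
```

When the fraction and exponent groups are both empty the scanner calls `parse_int`, otherwise `parse_float` — which is how `loads(..., parse_float=Decimal)` hooks in.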
# /usr/lib64/python2.7/json/tool.py
r"""Command-line tool to validate and pretty-print JSON

Usage::

    $ echo '{"json":"obj"}' | python -m json.tool
    {
        "json": "obj"
    }
    $ echo '{ 1.2:3.4}' | python -m json.tool
    Expecting property name enclosed in double quotes: line 1 column 3 (char 2)

"""
import sys
import json


def main():
    if len(sys.argv) == 1:
        infile = sys.stdin
        outfile = sys.stdout
    elif len(sys.argv) == 2:
        infile = open(sys.argv[1], 'rb')
        outfile = sys.stdout
    elif len(sys.argv) == 3:
        infile = open(sys.argv[1], 'rb')
        outfile = open(sys.argv[2], 'wb')
    else:
        raise SystemExit(sys.argv[0] + " [infile [outfile]]")
    with infile:
        try:
            obj = json.load(infile)
        except ValueError, e:
            raise SystemExit(e)
    with outfile:
        json.dump(obj, outfile, sort_keys=True,
                  indent=4, separators=(',', ': '))
        outfile.write('\n')


if __name__ == '__main__':
    main()
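The core of `json.tool` is just a load followed by a pretty-printed dump, with a parse error turned into a non-zero exit. A sketch of that behaviour as a plain function (the `validate` name is ours, not part of the module; exact error wording can vary between Python versions, but the "Expecting property name" prefix for this input does not):

```python
import json

def validate(text):
    # Mirrors json.tool's main(): parse, then pretty-print with sorted keys;
    # on failure json.tool raises SystemExit(e) -- here we return the message.
    try:
        obj = json.loads(text)
    except ValueError as e:
        return str(e)
    return json.dumps(obj, sort_keys=True, indent=4, separators=(',', ': '))

assert validate('{"json":"obj"}') == '{\n    "json": "obj"\n}'
assert validate('{ 1.2:3.4}').startswith('Expecting property name')
```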
# /usr/lib64/python2.7/json/decoder.py
"""Implementation of JSONDecoder
"""
import re
import sys
import struct

from json import scanner
try:
    from _json import scanstring as c_scanstring
except ImportError:
    c_scanstring = None

__all__ = ['JSONDecoder']

FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL

def _floatconstants():
    nan, = struct.unpack('>d', '\x7f\xf8\x00\x00\x00\x00\x00\x00')
    inf, = struct.unpack('>d', '\x7f\xf0\x00\x00\x00\x00\x00\x00')
    return nan, inf, -inf

NaN, PosInf, NegInf = _floatconstants()


def linecol(doc, pos):
    lineno = doc.count('\n', 0, pos) + 1
    if lineno == 1:
        colno = pos + 1
    else:
        colno = pos - doc.rindex('\n', 0, pos)
    return lineno, colno


def errmsg(msg, doc, pos, end=None):
    lineno, colno = linecol(doc, pos)
    if end is None:
        fmt = '{0}: line {1} column {2} (char {3})'
        return fmt.format(msg, lineno, colno, pos)
    endlineno, endcolno = linecol(doc, end)
    fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
    return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)


_CONSTANTS = {
    '-Infinity': NegInf,
    'Infinity': PosInf,
    'NaN': NaN,
}

STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
    '"': u'"', '\\': u'\\', '/': u'/',
    'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}

DEFAULT_ENCODING = "utf-8"

def _decode_uXXXX(s, pos):
    esc = s[pos + 1:pos + 5]
    if len(esc) == 4 and esc[1] not in 'xX':
        try:
            return int(esc, 16)
        except ValueError:
            pass
    msg = "Invalid \\uXXXX escape"
    raise ValueError(errmsg(msg, s, pos))

def py_scanstring(s, end, encoding=None, strict=True,
        _b=BACKSLASH, _m=STRINGCHUNK.match):
    """Scan the string s for a JSON string. End is the index of the
    character in s after the quote that started the JSON string.
    Unescapes all valid JSON string escape sequences and raises ValueError
    on attempt to decode an invalid string. If strict is False then literal
    control characters are allowed in the string.

    Returns a tuple of the decoded string and the index of the character in s
    after the end quote."""
    if encoding is None:
        encoding = DEFAULT_ENCODING
    chunks = []
    _append = chunks.append
    begin = end - 1
    while 1:
        chunk = _m(s, end)
        if chunk is None:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        end = chunk.end()
        content, terminator = chunk.groups()
        # Content is zero or more unescaped string characters
        if content:
            if not isinstance(content, unicode):
                content = unicode(content, encoding)
            _append(content)
        # Terminator is the end of string, a literal control character,
        # or a backslash denoting that an escape sequence follows
        if terminator == '"':
            break
        elif terminator != '\\':
            if strict:
                msg = "Invalid control character {0!r} at".format(terminator)
                raise ValueError(errmsg(msg, s, end))
            else:
                _append(terminator)
                continue
        try:
            esc = s[end]
        except IndexError:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        # If not a unicode escape sequence, must be in the lookup table
        if esc != 'u':
            try:
                char = _b[esc]
            except KeyError:
                msg = "Invalid \\escape: " + repr(esc)
                raise ValueError(errmsg(msg, s, end))
            end += 1
        else:
            # Unicode escape sequence
            uni = _decode_uXXXX(s, end)
            end += 5
            # Check for surrogate pair on UCS-4 systems
            if sys.maxunicode > 65535 and \
               0xd800 <= uni <= 0xdbff and s[end:end + 2] == '\\u':
                uni2 = _decode_uXXXX(s, end + 1)
                if 0xdc00 <= uni2 <= 0xdfff:
                    uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
                    end += 6
            char = unichr(uni)
        # Append the unescaped character
        _append(char)
    return u''.join(chunks), end


# Use speedup if available
scanstring = c_scanstring or py_scanstring

WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'

def JSONObject(s_and_end, encoding, strict, scan_once, object_hook,
               object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    s, end = s_and_end
    pairs = []
    pairs_append = pairs.append
    # Use a slice to prevent IndexError from being raised, the following
    # check will raise a more specific ValueError if the string is empty
    nextchar = s[end:end + 1]
    # Normally we expect nextchar == '"'
    if nextchar != '"':
        if nextchar in _ws:
            end = _w(s, end).end()
            nextchar = s[end:end + 1]
        # Trivial empty object
        if nextchar == '}':
            if object_pairs_hook is not None:
                result = object_pairs_hook(pairs)
                return result, end + 1
            pairs = {}
            if object_hook is not None:
                pairs = object_hook(pairs)
            return pairs, end + 1
        elif nextchar != '"':
            raise ValueError(errmsg(
                "Expecting property name enclosed in double quotes", s, end))
    end += 1
    while True:
        key, end = scanstring(s, end, encoding, strict)

        # To skip some function call overhead we optimize the fast paths where
        # the JSON key separator is ": " or just ":".
        if s[end:end + 1] != ':':
            end = _w(s, end).end()
            if s[end:end + 1] != ':':
                raise ValueError(errmsg("Expecting ':' delimiter", s, end))
        end += 1

        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        pairs_append((key, value))

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end = _w(s, end + 1).end()
                nextchar = s[end]
        except IndexError:
            nextchar = ''
        end += 1

        if nextchar == '}':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting ',' delimiter", s, end - 1))

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end += 1
                nextchar = s[end]
                if nextchar in _ws:
                    end = _w(s, end + 1).end()
                    nextchar = s[end]
        except IndexError:
            nextchar = ''

        end += 1
        if nextchar != '"':
            raise ValueError(errmsg(
                "Expecting property name enclosed in double quotes", s, end - 1))
    if object_pairs_hook is not None:
        result = object_pairs_hook(pairs)
        return result, end
    pairs = dict(pairs)
    if object_hook is not None:
        pairs = object_hook(pairs)
    return pairs, end

def JSONArray(s_and_end, scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    s, end = s_and_end
    values = []
    nextchar = s[end:end + 1]
    if nextchar in _ws:
        end = _w(s, end + 1).end()
        nextchar = s[end:end + 1]
    # Look-ahead for trivial empty array
    if nextchar == ']':
        return values, end + 1
    _append = values.append
    while True:
        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        _append(value)
        nextchar = s[end:end + 1]
        if nextchar in _ws:
            end = _w(s, end + 1).end()
            nextchar = s[end:end + 1]
        end += 1
        if nextchar == ']':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting ',' delimiter", s, end))
        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

    return values, end

class JSONDecoder(object):
    """Simple JSON <http://json.org> decoder

    Performs the following translations in decoding by default:

    +---------------+-------------------+
    | JSON          | Python            |
    +===============+===================+
    | object        | dict              |
    +---------------+-------------------+
    | array         | list              |
    +---------------+-------------------+
    | string        | unicode           |
    +---------------+-------------------+
    | number (int)  | int, long         |
    +---------------+-------------------+
    | number (real) | float             |
    +---------------+-------------------+
    | true          | True              |
    +---------------+-------------------+
    | false         | False             |
    +---------------+-------------------+
    | null          | None              |
    +---------------+-------------------+

    It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
    their corresponding ``float`` values, which is outside the JSON spec.

    """

    def __init__(self, encoding=None, object_hook=None, parse_float=None,
            parse_int=None, parse_constant=None, strict=True,
            object_pairs_hook=None):
        """``encoding`` determines the encoding used to interpret any ``str``
        objects decoded by this instance (utf-8 by default).  It has no
        effect when decoding ``unicode`` objects.

        Note that currently only encodings that are a superset of ASCII work,
        strings of other encodings should be passed in as ``unicode``.

        ``object_hook``, if specified, will be called with the result
        of every JSON object decoded and its return value will be used in
        place of the given ``dict``.  This can be used to provide custom
        deserializations (e.g. to support JSON-RPC class hinting).

        ``object_pairs_hook``, if specified will be called with the result of
        every JSON object decoded with an ordered list of pairs.  The return
        value of ``object_pairs_hook`` will be used instead of the ``dict``.
        This feature can be used to implement custom decoders that rely on the
        order that the key and value pairs are decoded (for example,
        collections.OrderedDict will remember the order of insertion). If
        ``object_hook`` is also defined, the ``object_pairs_hook`` takes
        priority.

        ``parse_float``, if specified, will be called with the string
        of every JSON float to be decoded. By default this is equivalent to
        float(num_str). This can be used to use another datatype or parser
        for JSON floats (e.g. decimal.Decimal).

        ``parse_int``, if specified, will be called with the string
        of every JSON int to be decoded. By default this is equivalent to
        int(num_str). This can be used to use another datatype or parser
        for JSON integers (e.g. float).

        ``parse_constant``, if specified, will be called with one of the
        following strings: -Infinity, Infinity, NaN.
        This can be used to raise an exception if invalid JSON numbers
        are encountered.

        If ``strict`` is false (true is the default), then control
        characters will be allowed inside strings.  Control characters in
        this context are those with character codes in the 0-31 range,
        including ``'\\t'`` (tab), ``'\\n'``, ``'\\r'`` and ``'\\0'``.

        """
        self.encoding = encoding
        self.object_hook = object_hook
        self.object_pairs_hook = object_pairs_hook
        self.parse_float = parse_float or float
        self.parse_int = parse_int or int
        self.parse_constant = parse_constant or _CONSTANTS.__getitem__
        self.strict = strict
        self.parse_object = JSONObject
        self.parse_array = JSONArray
        self.parse_string = scanstring
        self.scan_once = scanner.make_scanner(self)

    def decode(self, s, _w=WHITESPACE.match):
        """Return the Python representation of ``s`` (a ``str`` or ``unicode``
        instance containing a JSON document)

        """
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        end = _w(s, end).end()
        if end != len(s):
            raise ValueError(errmsg("Extra data", s, end, len(s)))
        return obj

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` or ``unicode``
        beginning with a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.

        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.

        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration:
            raise ValueError("No JSON object could be decoded")
        return obj, end
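Because `raw_decode()` returns the decoded object together with the index where it ended, repeated calls can pull several JSON documents out of one string, where `decode()` would raise ``Extra data``. A sketch using the public `json.JSONDecoder` (runs the same on Python 2 and 3):

```python
import json

# raw_decode() returns (object, end_index) instead of insisting the whole
# string is a single document, so a loop can walk a concatenated stream,
# skipping the whitespace between documents by hand.
dec = json.JSONDecoder()
s = '{"a": 1} [2, 3] "tail"'
docs, idx = [], 0
while idx < len(s):
    obj, end = dec.raw_decode(s, idx)
    docs.append(obj)
    idx = end
    while idx < len(s) and s[idx] in ' \t\n\r':  # inter-document whitespace
        idx += 1
assert docs == [{"a": 1}, [2, 3], "tail"]
```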
EW$r"""JSON (JavaScript Object Notation) <http://json.org> is a subset of
JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data
interchange format.

:mod:`json` exposes an API familiar to users of the standard library
:mod:`marshal` and :mod:`pickle` modules. It is the externally maintained
version of the :mod:`json` library contained in Python 2.6, but maintains
compatibility with Python 2.4 and Python 2.5 and (currently) has
significant performance advantages, even without using the optional C
extension for speedups.

Encoding basic Python object hierarchies::

    >>> import json
    >>> json.dumps(['foo', {'bar': ('baz', None, 1.0, 2)}])
    '["foo", {"bar": ["baz", null, 1.0, 2]}]'
    >>> print json.dumps("\"foo\bar")
    "\"foo\bar"
    >>> print json.dumps(u'\u1234')
    "\u1234"
    >>> print json.dumps('\\')
    "\\"
    >>> print json.dumps({"c": 0, "b": 0, "a": 0}, sort_keys=True)
    {"a": 0, "b": 0, "c": 0}
    >>> from StringIO import StringIO
    >>> io = StringIO()
    >>> json.dump(['streaming API'], io)
    >>> io.getvalue()
    '["streaming API"]'

Compact encoding::

    >>> import json
    >>> json.dumps([1,2,3,{'4': 5, '6': 7}], sort_keys=True, separators=(',',':'))
    '[1,2,3,{"4":5,"6":7}]'

Pretty printing::

    >>> import json
    >>> print json.dumps({'4': 5, '6': 7}, sort_keys=True,
    ...                  indent=4, separators=(',', ': '))
    {
        "4": 5,
        "6": 7
    }

Decoding JSON::

    >>> import json
    >>> obj = [u'foo', {u'bar': [u'baz', None, 1.0, 2]}]
    >>> json.loads('["foo", {"bar":["baz", null, 1.0, 2]}]') == obj
    True
    >>> json.loads('"\\"foo\\bar"') == u'"foo\x08ar'
    True
    >>> from StringIO import StringIO
    >>> io = StringIO('["streaming API"]')
    >>> json.load(io)[0] == 'streaming API'
    True

Specializing JSON object decoding::

    >>> import json
    >>> def as_complex(dct):
    ...     if '__complex__' in dct:
    ...         return complex(dct['real'], dct['imag'])
    ...     return dct
    ...
    >>> json.loads('{"__complex__": true, "real": 1, "imag": 2}',
    ...     object_hook=as_complex)
    (1+2j)
    >>> from decimal import Decimal
    >>> json.loads('1.1', parse_float=Decimal) == Decimal('1.1')
    True

Specializing JSON object encoding::

    >>> import json
    >>> def encode_complex(obj):
    ...     if isinstance(obj, complex):
    ...         return [obj.real, obj.imag]
    ...     raise TypeError(repr(obj) + " is not JSON serializable")
    ...
    >>> json.dumps(2 + 1j, default=encode_complex)
    '[2.0, 1.0]'
    >>> json.JSONEncoder(default=encode_complex).encode(2 + 1j)
    '[2.0, 1.0]'
    >>> ''.join(json.JSONEncoder(default=encode_complex).iterencode(2 + 1j))
    '[2.0, 1.0]'


Using json.tool from the shell to validate and pretty-print::

    $ echo '{"json":"obj"}' | python -m json.tool
    {
        "json": "obj"
    }
    $ echo '{ 1.2:3.4}' | python -m json.tool
    Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
"""
__version__ = '2.0.9'
__all__ = [
    'dump', 'dumps', 'load', 'loads',
    'JSONDecoder', 'JSONEncoder',
]

__author__ = 'Bob Ippolito <bob@redivi.com>'

from .decoder import JSONDecoder
from .encoder import JSONEncoder

_default_encoder = JSONEncoder(
    skipkeys=False,
    ensure_ascii=True,
    check_circular=True,
    allow_nan=True,
    indent=None,
    separators=None,
    encoding='utf-8',
    default=None,
)

def dump(obj, fp, skipkeys=False, ensure_ascii=True, check_circular=True,
        allow_nan=True, cls=None, indent=None, separators=None,
        encoding='utf-8', default=None, sort_keys=False, **kw):
    """Serialize ``obj`` as a JSON formatted stream to ``fp`` (a
    ``.write()``-supporting file-like object).

    If ``skipkeys`` is true then ``dict`` keys that are not basic types
    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
    will be skipped instead of raising a ``TypeError``.

    If ``ensure_ascii`` is true (the default), all non-ASCII characters in the
    output are escaped with ``\uXXXX`` sequences, and the result is a ``str``
    instance consisting of ASCII characters only.  If ``ensure_ascii`` is
    false, some chunks written to ``fp`` may be ``unicode`` instances.
    This usually happens because the input contains unicode strings or the
    ``encoding`` parameter is used. Unless ``fp.write()`` explicitly
    understands ``unicode`` (as in ``codecs.getwriter``) this is likely to
    cause an error.

    If ``check_circular`` is false, then the circular reference check
    for container types will be skipped and a circular reference will
    result in an ``OverflowError`` (or worse).

    If ``allow_nan`` is false, then it will be a ``ValueError`` to
    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``)
    in strict compliance with the JSON specification, instead of using the
    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).

    If ``indent`` is a non-negative integer, then JSON array elements and
    object members will be pretty-printed with that indent level. An indent
    level of 0 will only insert newlines. ``None`` is the most compact
    representation.  Since the default item separator is ``', '``,  the
    output might include trailing whitespace when ``indent`` is specified.
    You can use ``separators=(',', ': ')`` to avoid this.

    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
    then it will be used instead of the default ``(', ', ': ')`` separators.
    ``(',', ':')`` is the most compact JSON representation.

    ``encoding`` is the character encoding for str instances, default is UTF-8.

    ``default(obj)`` is a function that should return a serializable version
    of obj or raise TypeError. The default simply raises TypeError.

    If *sort_keys* is true (default: ``False``), then the output of
    dictionaries will be sorted by key.

    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
    ``.default()`` method to serialize additional types), specify it with
    the ``cls`` kwarg; otherwise ``JSONEncoder`` is used.

    """
    # cached encoder
    if (not skipkeys and ensure_ascii and
        check_circular and allow_nan and
        cls is None and indent is None and separators is None and
        encoding == 'utf-8' and default is None and not sort_keys and not kw):
        iterable = _default_encoder.iterencode(obj)
    else:
        if cls is None:
            cls = JSONEncoder
        iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
            check_circular=check_circular, allow_nan=allow_nan, indent=indent,
            separators=separators, encoding=encoding,
            default=default, sort_keys=sort_keys, **kw).iterencode(obj)
    # could accelerate with writelines in some versions of Python, at
    # a debuggability cost
    for chunk in iterable:
        fp.write(chunk)

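The fast path above is taken only for an all-default call; any keyword (here ``separators``) routes through a fresh encoder, and the chunks are then pushed through ``fp.write()``. A small usage sketch — ``Sink`` is a hypothetical stand-in for any ``.write()``-supporting file-like object:

```python
import json

class Sink(object):
    """Hypothetical minimal .write()-supporting object: collects the
    chunks that json.dump() emits one by one."""
    def __init__(self):
        self.chunks = []
    def write(self, s):
        self.chunks.append(s)

sink = Sink()
# sort_keys plus compact separators: deterministic, whitespace-free output.
json.dump({'a': 1, 'b': [2, 3]}, sink, sort_keys=True, separators=(',', ':'))
assert ''.join(sink.chunks) == '{"a":1,"b":[2,3]}'
```

Joining the collected chunks reproduces exactly what ``dumps`` would have returned for the same arguments.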

def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
        allow_nan=True, cls=None, indent=None, separators=None,
        encoding='utf-8', default=None, sort_keys=False, **kw):
    """Serialize ``obj`` to a JSON formatted ``str``.

    If ``skipkeys`` is true then ``dict`` keys that are not basic types
    (``str``, ``unicode``, ``int``, ``long``, ``float``, ``bool``, ``None``)
    will be skipped instead of raising a ``TypeError``.
    If ``ensure_ascii`` is false, all non-ASCII characters are not escaped, and
    the return value may be a ``unicode`` instance. See ``dump`` for details.

    If ``check_circular`` is false, then the circular reference check
    for container types will be skipped and a circular reference will
    result in an ``OverflowError`` (or worse).

    If ``allow_nan`` is false, then it will be a ``ValueError`` to
    serialize out of range ``float`` values (``nan``, ``inf``, ``-inf``) in
    strict compliance with the JSON specification, instead of using the
    JavaScript equivalents (``NaN``, ``Infinity``, ``-Infinity``).

    If ``indent`` is a non-negative integer, then JSON array elements and
    object members will be pretty-printed with that indent level. An indent
    level of 0 will only insert newlines. ``None`` is the most compact
    representation.  Since the default item separator is ``', '``,  the
    output might include trailing whitespace when ``indent`` is specified.
    You can use ``separators=(',', ': ')`` to avoid this.

    If ``separators`` is an ``(item_separator, dict_separator)`` tuple
    then it will be used instead of the default ``(', ', ': ')`` separators.
    ``(',', ':')`` is the most compact JSON representation.

    ``encoding`` is the character encoding for str instances, default is UTF-8.

    ``default(obj)`` is a function that should return a serializable version
    of obj or raise TypeError. The default simply raises TypeError.

    If *sort_keys* is true (default: ``False``), then the output of
    dictionaries will be sorted by key.

    To use a custom ``JSONEncoder`` subclass (e.g. one that overrides the
    ``.default()`` method to serialize additional types), specify it with
    the ``cls`` kwarg; otherwise ``JSONEncoder`` is used.

    """
    # cached encoder
    if (not skipkeys and ensure_ascii and
        check_circular and allow_nan and
        cls is None and indent is None and separators is None and
        encoding == 'utf-8' and default is None and not sort_keys and not kw):
        return _default_encoder.encode(obj)
    if cls is None:
        cls = JSONEncoder
    return cls(
        skipkeys=skipkeys, ensure_ascii=ensure_ascii,
        check_circular=check_circular, allow_nan=allow_nan, indent=indent,
        separators=separators, encoding=encoding, default=default,
        sort_keys=sort_keys, **kw).encode(obj)

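The ``separators`` behaviour described in the docstring is easy to check: the defaults ``(', ', ': ')`` pad each delimiter with a space, while ``(',', ':')`` gives the most compact form, and ``indent=0`` still inserts newlines. A quick sketch:

```python
import json

data = {'b': [1, 2], 'a': 3}

# Default separators (', ', ': ') include a space after each delimiter.
assert json.dumps(data, sort_keys=True) == '{"a": 3, "b": [1, 2]}'

# (',', ':') removes that padding -- the most compact representation.
assert json.dumps(data, sort_keys=True, separators=(',', ':')) == '{"a":3,"b":[1,2]}'

# indent=0 is not the same as indent=None: it still breaks lines.
assert json.dumps([1, 2], indent=0).splitlines()[0] == '['
```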

_default_decoder = JSONDecoder(encoding=None, object_hook=None,
                               object_pairs_hook=None)


def load(fp, encoding=None, cls=None, object_hook=None, parse_float=None,
        parse_int=None, parse_constant=None, object_pairs_hook=None, **kw):
    """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing
    a JSON document) to a Python object.

    If the contents of ``fp`` are encoded with an ASCII-based encoding other
    than utf-8 (e.g. latin-1), then an appropriate ``encoding`` name must
    be specified. Encodings that are not ASCII-based (such as UCS-2) are
    not allowed; such a file should be wrapped with
    ``codecs.getreader(encoding)(fp)``, or simply decoded to a ``unicode``
    object and passed to ``loads()``.

    ``object_hook`` is an optional function that will be called with the
    result of any object literal decode (a ``dict``). The return value of
    ``object_hook`` will be used instead of the ``dict``. This feature
    can be used to implement custom decoders (e.g. JSON-RPC class hinting).

    ``object_pairs_hook`` is an optional function that will be called with the
    result of decoding any object literal, given as an ordered list of pairs.  The
    return value of ``object_pairs_hook`` will be used instead of the ``dict``.
    This feature can be used to implement custom decoders that rely on the
    order that the key and value pairs are decoded (for example,
    collections.OrderedDict will remember the order of insertion). If
    ``object_hook`` is also defined, the ``object_pairs_hook`` takes priority.

    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
    kwarg; otherwise ``JSONDecoder`` is used.

    """
    return loads(fp.read(),
        encoding=encoding, cls=cls, object_hook=object_hook,
        parse_float=parse_float, parse_int=parse_int,
        parse_constant=parse_constant, object_pairs_hook=object_pairs_hook,
        **kw)

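The ``object_hook`` mechanism is simplest to see through ``loads`` (to which ``load`` delegates after ``fp.read()``). A sketch of the JSON-RPC-style class hinting mentioned above:

```python
import json

def as_complex(dct):
    # Called once per decoded object literal; replace hinted dicts
    # with a richer Python type, pass everything else through.
    if '__complex__' in dct:
        return complex(dct['real'], dct['imag'])
    return dct

obj = json.loads('{"__complex__": true, "real": 1, "imag": 2}',
                 object_hook=as_complex)
assert obj == complex(1, 2)
```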

def loads(s, encoding=None, cls=None, object_hook=None, parse_float=None,
        parse_int=None, parse_constant=None, object_pairs_hook=None, **kw):
    """Deserialize ``s`` (a ``str`` or ``unicode`` instance containing a JSON
    document) to a Python object.

    If ``s`` is a ``str`` instance and is encoded with an ASCII based encoding
    other than utf-8 (e.g. latin-1) then an appropriate ``encoding`` name
    must be specified. Encodings that are not ASCII based (such as UCS-2)
    are not allowed and should be decoded to ``unicode`` first.

    ``object_hook`` is an optional function that will be called with the
    result of any object literal decode (a ``dict``). The return value of
    ``object_hook`` will be used instead of the ``dict``. This feature
    can be used to implement custom decoders (e.g. JSON-RPC class hinting).

    ``object_pairs_hook`` is an optional function that will be called with the
    result of decoding any object literal, given as an ordered list of pairs.  The
    return value of ``object_pairs_hook`` will be used instead of the ``dict``.
    This feature can be used to implement custom decoders that rely on the
    order that the key and value pairs are decoded (for example,
    collections.OrderedDict will remember the order of insertion). If
    ``object_hook`` is also defined, the ``object_pairs_hook`` takes priority.

    ``parse_float``, if specified, will be called with the string
    of every JSON float to be decoded. By default this is equivalent to
    float(num_str). This can be used to use another datatype or parser
    for JSON floats (e.g. decimal.Decimal).

    ``parse_int``, if specified, will be called with the string
    of every JSON int to be decoded. By default this is equivalent to
    int(num_str). This can be used to use another datatype or parser
    for JSON integers (e.g. float).

    ``parse_constant``, if specified, will be called with one of the
    following strings: -Infinity, Infinity, NaN.
    This can be used to raise an exception if invalid JSON numbers
    are encountered.

    To use a custom ``JSONDecoder`` subclass, specify it with the ``cls``
    kwarg; otherwise ``JSONDecoder`` is used.

    """
    if (cls is None and encoding is None and object_hook is None and
            parse_int is None and parse_float is None and
            parse_constant is None and object_pairs_hook is None and not kw):
        return _default_decoder.decode(s)
    if cls is None:
        cls = JSONDecoder
    if object_hook is not None:
        kw['object_hook'] = object_hook
    if object_pairs_hook is not None:
        kw['object_pairs_hook'] = object_pairs_hook
    if parse_float is not None:
        kw['parse_float'] = parse_float
    if parse_int is not None:
        kw['parse_int'] = parse_int
    if parse_constant is not None:
        kw['parse_constant'] = parse_constant
    return cls(encoding=encoding, **kw).decode(s)
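The remaining hooks work the same way: ``object_pairs_hook`` sees each object literal as an ordered list of pairs (and takes priority over ``object_hook``), while ``parse_float``/``parse_int`` substitute the number constructors. For example:

```python
import json
from collections import OrderedDict
from decimal import Decimal

# object_pairs_hook receives [(key, value), ...] in document order.
od = json.loads('{"b": 1, "a": 2}', object_pairs_hook=OrderedDict)
assert list(od.keys()) == ['b', 'a']

# parse_float is called with the raw number string, here '1.1'.
assert json.loads('1.1', parse_float=Decimal) == Decimal('1.1')

# parse_int can likewise substitute another type for integers.
assert json.loads('7', parse_int=float) == 7.0
```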
{fc@sdZddlZyddlmZWnek
r?dZnXyddlmZWnek
rmdZnXej	d�Z
ej	d�Zej	d�Zidd	6d
d6dd
6dd6dd6dd6dd6Z
x3ed�D]%Ze
jee�dje��q�Wed�ZejZd�Zd�Zep8eZdefd��YZeeeeeee e!e"e#e$d�Z%dS(sImplementation of JSONEncoder
i����N(tencode_basestring_ascii(tmake_encoders[\x00-\x1f\\"\b\f\n\r\t]s([\\"]|[^\ -~])s[\x80-\xff]s\\s\s\"t"s\bss\fss\ns
s\rs
s\ts	i s	\u{0:04x}tinfcCs!d�}dtj||�dS(s5Return a JSON representation of a Python string

    cSst|jd�S(Ni(t
ESCAPE_DCTtgroup(tmatch((s$/usr/lib64/python2.7/json/encoder.pytreplace%sR(tESCAPEtsub(tsR((s$/usr/lib64/python2.7/json/encoder.pytencode_basestring!s	cCs]t|t�r6tj|�dk	r6|jd�}nd�}dttj||��dS(sAReturn an ASCII-only JSON representation of a Python string

    sutf-8cSs�|jd�}yt|SWnptk
r�t|�}|dkrPdj|�S|d8}d|d?d@B}d|d@B}dj||�SnXdS(	Niis	\u{0:04x}i�i
i�i�s\u{0:04x}\u{1:04x}(RRtKeyErrortordtformat(RR
tnts1ts2((s$/usr/lib64/python2.7/json/encoder.pyR0s


RN(t
isinstancetstrtHAS_UTF8tsearchtNonetdecodetESCAPE_ASCIIR	(R
R((s$/usr/lib64/python2.7/json/encoder.pytpy_encode_basestring_ascii*s$	tJSONEncoderc
Bs\eZdZdZdZeeeeeddddd�	Zd�Z	d�Z
ed�ZRS(	sZExtensible JSON <http://json.org> encoder for Python data structures.

    Supports the following objects and types by default:

    +-------------------+---------------+
    | Python            | JSON          |
    +===================+===============+
    | dict              | object        |
    +-------------------+---------------+
    | list, tuple       | array         |
    +-------------------+---------------+
    | str, unicode      | string        |
    +-------------------+---------------+
    | int, long, float  | number        |
    +-------------------+---------------+
    | True              | true          |
    +-------------------+---------------+
    | False             | false         |
    +-------------------+---------------+
    | None              | null          |
    +-------------------+---------------+

    To extend this to recognize other objects, subclass and implement a
    ``.default()`` method with another method that returns a serializable
    object for ``o`` if possible, otherwise it should call the superclass
    implementation (to raise ``TypeError``).

    s, s: sutf-8c

Cs|||_||_||_||_||_||_|dk	rW|\|_|_n|	dk	ro|	|_	n||_
dS(s�	Constructor for JSONEncoder, with sensible defaults.

        If skipkeys is false, then it is a TypeError to attempt
        encoding of keys that are not str, int, long, float or None.  If
        skipkeys is True, such items are simply skipped.

        If *ensure_ascii* is true (the default), all non-ASCII
        characters in the output are escaped with \uXXXX sequences,
        and the results are str instances consisting of ASCII
        characters only.  If ensure_ascii is False, a result may be a
        unicode instance.  This usually happens if the input contains
        unicode strings or the *encoding* parameter is used.

        If check_circular is true, then lists, dicts, and custom encoded
        objects will be checked for circular references during encoding to
        prevent an infinite recursion (which would cause an OverflowError).
        Otherwise, no such check takes place.

        If allow_nan is true, then NaN, Infinity, and -Infinity will be
        encoded as such.  This behavior is not JSON specification compliant,
        but is consistent with most JavaScript based encoders and decoders.
        Otherwise, it will be a ValueError to encode such floats.

        If sort_keys is true, then the output of dictionaries will be
        sorted by key; this is useful for regression tests to ensure
        that JSON serializations can be compared on a day-to-day basis.

        If indent is a non-negative integer, then JSON array
        elements and object members will be pretty-printed with that
        indent level.  An indent level of 0 will only insert newlines.
        None is the most compact representation.  Since the default
        item separator is ', ',  the output might include trailing
        whitespace when indent is specified.  You can use
        separators=(',', ': ') to avoid this.

        If specified, separators should be a (item_separator, key_separator)
        tuple.  The default is (', ', ': ').  To get the most compact JSON
        representation you should specify (',', ':') to eliminate whitespace.

        If specified, default is a function that gets called for objects
        that can't otherwise be serialized.  It should return a JSON encodable
        version of the object or raise a ``TypeError``.

        If encoding is not None, then all input strings will be
        transformed into unicode using that encoding prior to JSON-encoding.
        The default is UTF-8.

        N(tskipkeystensure_asciitcheck_circulart	allow_nant	sort_keystindentRtitem_separatort
key_separatortdefaulttencoding(
tselfRRRRRR t
separatorsR$R#((s$/usr/lib64/python2.7/json/encoder.pyt__init__es4						cCstt|�d��dS(slImplement this method in a subclass such that it returns
        a serializable object for ``o``, or calls the base implementation
        (to raise a ``TypeError``).

        For example, to support arbitrary iterators, you could
        implement default like this::

            def default(self, o):
                try:
                    iterable = iter(o)
                except TypeError:
                    pass
                else:
                    return list(iterable)
                # Let the base class default method raise the TypeError
                return JSONEncoder.default(self, o)

        s is not JSON serializableN(t	TypeErrortrepr(R%to((s$/usr/lib64/python2.7/json/encoder.pyR#�scCs�t|t�rut|t�rU|j}|dk	rU|dkrU|j|�}qUn|jrht|�St|�Sn|j	|dt
�}t|ttf�s�t|�}ndj
|�S(s�Return a JSON string representation of a Python data structure.

        >>> JSONEncoder().encode({"foo": ["bar", "baz"]})
        '{"foo": ["bar", "baz"]}'

        sutf-8t	_one_shottN(Rt
basestringRR$RRRRRt
iterencodetTruetlistttupletjoin(R%R*t	_encodingtchunks((s$/usr/lib64/python2.7/json/encoder.pytencode�s	
	

cCs|jri}nd}|jr*t}nt}|jdkrT||jd�}n|jtttd�}|r�t	dk	r�|j
dkr�|jr�t	||j||j
|j
|j|j|j|j�	}n9t||j||j
||j
|j|j|j|�
}||d�S(s�Encode the given object and yield each string
        representation as available.

        For example::

            for chunk in JSONEncoder().iterencode(bigobject):
                mysocket.write(chunk)

        sutf-8cSs+t|t�r!|j|�}n||�S(N(RRR(R*t
_orig_encoderR3((s$/usr/lib64/python2.7/json/encoder.pyt_encoder�scSsl||krd}n4||kr*d}n||kr?d}n
||�S|shtdt|���n|S(NtNaNtInfinitys	-Infinitys2Out of range float values are not JSON compliant: (t
ValueErrorR)(R*Rt_reprt_inft_neginfttext((s$/usr/lib64/python2.7/json/encoder.pytfloatstr�s			
iN(RRRRRR$Rt
FLOAT_REPRtINFINITYtc_make_encoderR RR#R"R!Rt_make_iterencode(R%R*R+tmarkersR7R?t_iterencode((s$/usr/lib64/python2.7/json/encoder.pyR.�s*
				N(t__name__t
__module__t__doc__R!R"tFalseR/RR'R#R5R.(((s$/usr/lib64/python2.7/json/encoder.pyRFs	>		cs�����������
���������fd�����������	�
���
���������fd�����������
���������fd���S(Nc
3s8|sdVdS�dk	rO�|�}|�krB�d��n|�|<nd}�dk	r�|d7}dd�|}�|}||7}nd}�}t}xF|D]>}|r�t}n|}�
|��r�|�|�Vq�|dkr|dVq�|tkr|dVq�|tkr1|d	Vq��
|��f�rX|�|�Vq��
|�
�ry|�|�Vq�|V�
|��f�r��||�}n0�
|�	�r��||�}n�||�}x|D]}	|	Vq�Wq�W|dk	r|d8}dd�|Vnd
V�dk	r4�|=ndS(Ns[]sCircular reference detectedt[is
t tnullttruetfalset](RR/RI(
tlstt_current_indent_leveltmarkeridtbuftnewline_indentt	separatortfirsttvalueR4tchunk(R:R7t	_floatstrt_indentt_item_separatorREt_iterencode_dictt_iterencode_listR-tdicttfloattidtintRR0tlongRDRR1(s$/usr/lib64/python2.7/json/encoder.pyR] s^




	


c3s|sdVdS�dk	rO�|�}|�krB�d��n|�|<ndV�dk	r�|d7}dd�|}�|}|Vnd}�}t}�
r�t|j�dd��}n|j�}x�|D]�\}}�|��r�n��|�
�r�|�}n�|tkr(d	}nt|tkr=d
}n_|dkrRd}nJ�|��f�rv�|�}n&�	r�q�ntdt|�d
��|r�t}n|V�|�V�V�|��r��|�Vq�|dkr�dVq�|tkrd	Vq�|tkrd
Vq��|��f�r<�|�Vq��|�
�rY�|�Vq��|��f�r��||�}	n0�|��r��||�}	n�||�}	x|	D]}
|
Vq�Wq�W|dk	r�|d8}dd�|VndV�dk	r�|=ndS(Ns{}sCircular reference detectedt{is
RKtkeycSs|dS(Ni((tkv((s$/usr/lib64/python2.7/json/encoder.pyt<lambda>iR,RMRNRLskey s is not a stringt}(RR/tsortedtitemst	iteritemsRIR(R)(tdctRQRRRTR!RVRiRdRWR4RX(R:R7RYRZR[RER\R]t_key_separatort	_skipkeyst
_sort_keysR-R^R_R`RaRR0RbRDRR1(s$/usr/lib64/python2.7/json/encoder.pyR\Us�


				


c3s��|��r�|�Vne|dkr1dVnQ|tkrEdVn=|tkrYdVn)�|��f�r|�|�Vn�|�	�r��|�Vn��|�
�f�r�x��||�D]}|Vq�Wn��|��rx��||�D]}|Vq�Wn��dk	rA�
|�}|�kr4�d��n|�|<n�|�}x�||�D]}|Vq]W�dk	r��|=ndS(NRLRMRNsCircular reference detected(RR/RI(R*RQRXRR(R:t_defaultR7RYRER\R]R-R^R_R`RaRR0RbRDRR1(s$/usr/lib64/python2.7/json/encoder.pyRE�s8
	((RDRoR7RZRYRlR[RnRmR+R:R-R^R_R`RaRR0RbRR1((R:RoR7RYRZR[RER\R]RlRmRnR-R^R_R`RaRR0RbRDRR1s$/usr/lib64/python2.7/json/encoder.pyRCsE5NLB(&RHtret_jsonRtc_encode_basestring_asciitImportErrorRRRBtcompileRRRRtrangetit
setdefaulttchrRR_RAt__repr__R@RRtobjectRR:R-R^R`RaRR0RbRR1RC(((s$/usr/lib64/python2.7/json/encoder.pyt<module>sN




#				��
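The ``default`` hook documented in the class above is the extension point; a sketch of the iterator-supporting subclass its docstring describes (``IterEncoder`` is a hypothetical name):

```python
import json

class IterEncoder(json.JSONEncoder):
    """Hypothetical subclass: serialize arbitrary iterables as JSON arrays."""
    def default(self, o):
        try:
            iterable = iter(o)
        except TypeError:
            pass
        else:
            return list(iterable)
        # Let the base class default method raise the usual TypeError.
        return json.JSONEncoder.default(self, o)

# A generator/iterator is not a list, so encode() falls back to default().
assert IterEncoder().encode({'gen': iter([1, 2, 3])}) == '{"gen": [1, 2, 3]}'
```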
"""JSON token scanner
"""
import re
try:
    from _json import make_scanner as c_make_scanner
except ImportError:
    c_make_scanner = None

__all__ = ['make_scanner']

NUMBER_RE = re.compile(
    r'(-?(?:0|[1-9]\d*))(\.\d+)?([eE][-+]?\d+)?',
    (re.VERBOSE | re.MULTILINE | re.DOTALL))

def py_make_scanner(context):
    parse_object = context.parse_object
    parse_array = context.parse_array
    parse_string = context.parse_string
    match_number = NUMBER_RE.match
    encoding = context.encoding
    strict = context.strict
    parse_float = context.parse_float
    parse_int = context.parse_int
    parse_constant = context.parse_constant
    object_hook = context.object_hook
    object_pairs_hook = context.object_pairs_hook

    def _scan_once(string, idx):
        try:
            nextchar = string[idx]
        except IndexError:
            raise StopIteration

        if nextchar == '"':
            return parse_string(string, idx + 1, encoding, strict)
        elif nextchar == '{':
            return parse_object((string, idx + 1), encoding, strict,
                _scan_once, object_hook, object_pairs_hook)
        elif nextchar == '[':
            return parse_array((string, idx + 1), _scan_once)
        elif nextchar == 'n' and string[idx:idx + 4] == 'null':
            return None, idx + 4
        elif nextchar == 't' and string[idx:idx + 4] == 'true':
            return True, idx + 4
        elif nextchar == 'f' and string[idx:idx + 5] == 'false':
            return False, idx + 5

        m = match_number(string, idx)
        if m is not None:
            integer, frac, exp = m.groups()
            if frac or exp:
                res = parse_float(integer + (frac or '') + (exp or ''))
            else:
                res = parse_int(integer)
            return res, m.end()
        elif nextchar == 'N' and string[idx:idx + 3] == 'NaN':
            return parse_constant('NaN'), idx + 3
        elif nextchar == 'I' and string[idx:idx + 8] == 'Infinity':
            return parse_constant('Infinity'), idx + 8
        elif nextchar == '-' and string[idx:idx + 9] == '-Infinity':
            return parse_constant('-Infinity'), idx + 9
        else:
            raise StopIteration

    return _scan_once

make_scanner = c_make_scanner or py_make_scanner
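The scanner is normally reached through a `JSONDecoder` instance, whose `__init__` passes itself as the `context` object carrying all the attributes read above. A hedged sketch using the stdlib decoder (``scan_once`` is an internal attribute, so treat this as illustrative only):

```python
import json

# Exercise the scanner through a JSONDecoder; scan_once returns the
# decoded value plus the index just past it.
scan_once = json.JSONDecoder().scan_once
value, end = scan_once('[1, 2.5, null]', 0)
# value == [1, 2.5, None]; end == 14
```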
"""Implementation of JSONDecoder
"""
import re
import sys
import struct

from json import scanner
try:
    from _json import scanstring as c_scanstring
except ImportError:
    c_scanstring = None

__all__ = ['JSONDecoder']

FLAGS = re.VERBOSE | re.MULTILINE | re.DOTALL

def _floatconstants():
    nan, = struct.unpack('>d', b'\x7f\xf8\x00\x00\x00\x00\x00\x00')
    inf, = struct.unpack('>d', b'\x7f\xf0\x00\x00\x00\x00\x00\x00')
    return nan, inf, -inf

NaN, PosInf, NegInf = _floatconstants()


def linecol(doc, pos):
    lineno = doc.count('\n', 0, pos) + 1
    if lineno == 1:
        colno = pos + 1
    else:
        colno = pos - doc.rindex('\n', 0, pos)
    return lineno, colno
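`linecol` turns a flat character offset into the 1-based (line, column) pair used in error messages. A self-contained copy with a worked example:

```python
# Self-contained copy of linecol above: map a flat character offset in a
# document to a 1-based (line, column) pair.
def linecol(doc, pos):
    lineno = doc.count('\n', 0, pos) + 1
    if lineno == 1:
        colno = pos + 1
    else:
        colno = pos - doc.rindex('\n', 0, pos)
    return lineno, colno

# linecol('ab\ncd', 4) == (2, 2): offset 4 is the 'd' on line 2
```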


def errmsg(msg, doc, pos, end=None):
    # Note that this function is called from _json
    lineno, colno = linecol(doc, pos)
    if end is None:
        fmt = '{0}: line {1} column {2} (char {3})'
        return fmt.format(msg, lineno, colno, pos)
        #fmt = '%s: line %d column %d (char %d)'
        #return fmt % (msg, lineno, colno, pos)
    endlineno, endcolno = linecol(doc, end)
    fmt = '{0}: line {1} column {2} - line {3} column {4} (char {5} - {6})'
    return fmt.format(msg, lineno, colno, endlineno, endcolno, pos, end)
    #fmt = '%s: line %d column %d - line %d column %d (char %d - %d)'
    #return fmt % (msg, lineno, colno, endlineno, endcolno, pos, end)


_CONSTANTS = {
    '-Infinity': NegInf,
    'Infinity': PosInf,
    'NaN': NaN,
}
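These three constants are the non-spec extensions mentioned in the `JSONDecoder` docstring; by default `parse_constant` is `_CONSTANTS.__getitem__`. A quick check through the public entry point:

```python
import json
import math

# The default decoder resolves NaN/Infinity/-Infinity (outside the JSON
# spec) to the corresponding floats.
vals = json.loads('[NaN, Infinity, -Infinity]')
# math.isnan(vals[0]); vals[1] == inf; vals[2] == -inf
```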

STRINGCHUNK = re.compile(r'(.*?)(["\\\x00-\x1f])', FLAGS)
BACKSLASH = {
    '"': u'"', '\\': u'\\', '/': u'/',
    'b': u'\b', 'f': u'\f', 'n': u'\n', 'r': u'\r', 't': u'\t',
}

DEFAULT_ENCODING = "utf-8"

def _decode_uXXXX(s, pos):
    esc = s[pos + 1:pos + 5]
    if len(esc) == 4 and esc[1] not in 'xX':
        try:
            return int(esc, 16)
        except ValueError:
            pass
    msg = "Invalid \\uXXXX escape"
    raise ValueError(errmsg(msg, s, pos))
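`_decode_uXXXX` is called with `pos` pointing at the `u` of a `\uXXXX` escape and returns the code point. A self-contained sketch with the error reporting simplified to a plain `ValueError`:

```python
# Self-contained sketch of _decode_uXXXX above; pos points at the 'u'.
# The esc[1] check rejects forms like '\u0x41'.
def decode_uXXXX(s, pos):
    esc = s[pos + 1:pos + 5]
    if len(esc) == 4 and esc[1] not in 'xX':
        try:
            return int(esc, 16)
        except ValueError:
            pass
    raise ValueError("Invalid \\uXXXX escape at %d" % pos)

# decode_uXXXX(r'"\u0041"', 2) == 0x41  (the letter 'A')
```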

def py_scanstring(s, end, encoding=None, strict=True,
        _b=BACKSLASH, _m=STRINGCHUNK.match):
    """Scan the string s for a JSON string. End is the index of the
    character in s after the quote that started the JSON string.
    Unescapes all valid JSON string escape sequences and raises ValueError
    on attempt to decode an invalid string. If strict is False then literal
    control characters are allowed in the string.

    Returns a tuple of the decoded string and the index of the character in s
    after the end quote."""
    if encoding is None:
        encoding = DEFAULT_ENCODING
    chunks = []
    _append = chunks.append
    begin = end - 1
    while 1:
        chunk = _m(s, end)
        if chunk is None:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        end = chunk.end()
        content, terminator = chunk.groups()
        # Content contains zero or more unescaped string characters
        if content:
            if not isinstance(content, unicode):
                content = unicode(content, encoding)
            _append(content)
        # Terminator is the end of string, a literal control character,
        # or a backslash denoting that an escape sequence follows
        if terminator == '"':
            break
        elif terminator != '\\':
            if strict:
                #msg = "Invalid control character %r at" % (terminator,)
                msg = "Invalid control character {0!r} at".format(terminator)
                raise ValueError(errmsg(msg, s, end))
            else:
                _append(terminator)
                continue
        try:
            esc = s[end]
        except IndexError:
            raise ValueError(
                errmsg("Unterminated string starting at", s, begin))
        # If not a unicode escape sequence, must be in the lookup table
        if esc != 'u':
            try:
                char = _b[esc]
            except KeyError:
                msg = "Invalid \\escape: " + repr(esc)
                raise ValueError(errmsg(msg, s, end))
            end += 1
        else:
            # Unicode escape sequence
            uni = _decode_uXXXX(s, end)
            end += 5
            # Check for surrogate pair on UCS-4 systems
            if sys.maxunicode > 65535 and \
               0xd800 <= uni <= 0xdbff and s[end:end + 2] == '\\u':
                uni2 = _decode_uXXXX(s, end + 1)
                if 0xdc00 <= uni2 <= 0xdfff:
                    uni = 0x10000 + (((uni - 0xd800) << 10) | (uni2 - 0xdc00))
                    end += 6
            char = unichr(uni)
        # Append the unescaped character
        _append(char)
    return u''.join(chunks), end


# Use speedup if available
scanstring = c_scanstring or py_scanstring
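A hedged sketch of the scanner in use: `py_scanstring` takes the index just past the opening quote and returns the decoded text plus the index just past the closing quote. (This calls the module-private `py_scanstring` from the stdlib; on Python 3 the `encoding` parameter no longer exists, so only the positional `s, end` form is used here.)

```python
from json.decoder import py_scanstring

# Index 1 is just past the opening quote; the \n escape is decoded.
text, end = py_scanstring('"a\\nb" tail', 1)
# text == 'a\nb'; end == 6 (just past the closing quote)
```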

WHITESPACE = re.compile(r'[ \t\n\r]*', FLAGS)
WHITESPACE_STR = ' \t\n\r'

def JSONObject(s_and_end, encoding, strict, scan_once, object_hook,
               object_pairs_hook, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    s, end = s_and_end
    pairs = []
    pairs_append = pairs.append
    # Use a slice to prevent IndexError from being raised; the following
    # check will raise a more specific ValueError if the string is empty
    nextchar = s[end:end + 1]
    # Normally we expect nextchar == '"'
    if nextchar != '"':
        if nextchar in _ws:
            end = _w(s, end).end()
            nextchar = s[end:end + 1]
        # Trivial empty object
        if nextchar == '}':
            if object_pairs_hook is not None:
                result = object_pairs_hook(pairs)
                return result, end + 1
            pairs = {}
            if object_hook is not None:
                pairs = object_hook(pairs)
            return pairs, end + 1
        elif nextchar != '"':
            raise ValueError(errmsg(
                "Expecting property name enclosed in double quotes", s, end))
    end += 1
    while True:
        key, end = scanstring(s, end, encoding, strict)

        # To skip some function call overhead we optimize the fast paths where
        # the JSON key separator is ": " or just ":".
        if s[end:end + 1] != ':':
            end = _w(s, end).end()
            if s[end:end + 1] != ':':
                raise ValueError(errmsg("Expecting ':' delimiter", s, end))
        end += 1

        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        pairs_append((key, value))

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end = _w(s, end + 1).end()
                nextchar = s[end]
        except IndexError:
            nextchar = ''
        end += 1

        if nextchar == '}':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting ',' delimiter", s, end - 1))

        try:
            nextchar = s[end]
            if nextchar in _ws:
                end += 1
                nextchar = s[end]
                if nextchar in _ws:
                    end = _w(s, end + 1).end()
                    nextchar = s[end]
        except IndexError:
            nextchar = ''

        end += 1
        if nextchar != '"':
            raise ValueError(errmsg(
                "Expecting property name enclosed in double quotes", s, end - 1))
    if object_pairs_hook is not None:
        result = object_pairs_hook(pairs)
        return result, end
    pairs = dict(pairs)
    if object_hook is not None:
        pairs = object_hook(pairs)
    return pairs, end
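The `object_pairs_hook` branch above receives the key/value pairs in document order before any `dict` is built. A sketch driven through the public `json.loads` entry point:

```python
import collections
import json

# object_pairs_hook sees the pairs in source order, so an OrderedDict
# preserves the document's key order.
od = json.loads('{"b": 1, "a": 2}',
                object_pairs_hook=collections.OrderedDict)
# list(od) == ['b', 'a']
```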

def JSONArray(s_and_end, scan_once, _w=WHITESPACE.match, _ws=WHITESPACE_STR):
    s, end = s_and_end
    values = []
    nextchar = s[end:end + 1]
    if nextchar in _ws:
        end = _w(s, end + 1).end()
        nextchar = s[end:end + 1]
    # Look-ahead for trivial empty array
    if nextchar == ']':
        return values, end + 1
    _append = values.append
    while True:
        try:
            value, end = scan_once(s, end)
        except StopIteration:
            raise ValueError(errmsg("Expecting object", s, end))
        _append(value)
        nextchar = s[end:end + 1]
        if nextchar in _ws:
            end = _w(s, end + 1).end()
            nextchar = s[end:end + 1]
        end += 1
        if nextchar == ']':
            break
        elif nextchar != ',':
            raise ValueError(errmsg("Expecting ',' delimiter", s, end))
        try:
            if s[end] in _ws:
                end += 1
                if s[end] in _ws:
                    end = _w(s, end + 1).end()
        except IndexError:
            pass

    return values, end

class JSONDecoder(object):
    """Simple JSON <http://json.org> decoder

    Performs the following translations in decoding by default:

    +---------------+-------------------+
    | JSON          | Python            |
    +===============+===================+
    | object        | dict              |
    +---------------+-------------------+
    | array         | list              |
    +---------------+-------------------+
    | string        | unicode           |
    +---------------+-------------------+
    | number (int)  | int, long         |
    +---------------+-------------------+
    | number (real) | float             |
    +---------------+-------------------+
    | true          | True              |
    +---------------+-------------------+
    | false         | False             |
    +---------------+-------------------+
    | null          | None              |
    +---------------+-------------------+

    It also understands ``NaN``, ``Infinity``, and ``-Infinity`` as
    their corresponding ``float`` values, which is outside the JSON spec.

    """

    def __init__(self, encoding=None, object_hook=None, parse_float=None,
            parse_int=None, parse_constant=None, strict=True,
            object_pairs_hook=None):
        """``encoding`` determines the encoding used to interpret any ``str``
        objects decoded by this instance (utf-8 by default).  It has no
        effect when decoding ``unicode`` objects.

        Note that currently only encodings that are a superset of ASCII work;
        strings of other encodings should be passed in as ``unicode``.

        ``object_hook``, if specified, will be called with the result
        of every JSON object decoded and its return value will be used in
        place of the given ``dict``.  This can be used to provide custom
        deserializations (e.g. to support JSON-RPC class hinting).

        ``object_pairs_hook``, if specified, will be called with the result of
        every JSON object decoded with an ordered list of pairs.  The return
        value of ``object_pairs_hook`` will be used instead of the ``dict``.
        This feature can be used to implement custom decoders that rely on the
        order that the key and value pairs are decoded (for example,
        collections.OrderedDict will remember the order of insertion). If
        ``object_hook`` is also defined, the ``object_pairs_hook`` takes
        priority.

        ``parse_float``, if specified, will be called with the string
        of every JSON float to be decoded. By default this is equivalent to
        float(num_str). This can be used to use another datatype or parser
        for JSON floats (e.g. decimal.Decimal).

        ``parse_int``, if specified, will be called with the string
        of every JSON int to be decoded. By default this is equivalent to
        int(num_str). This can be used to use another datatype or parser
        for JSON integers (e.g. float).

        ``parse_constant``, if specified, will be called with one of the
        following strings: -Infinity, Infinity, NaN.
        This can be used to raise an exception if invalid JSON numbers
        are encountered.

        If ``strict`` is false (true is the default), then control
        characters will be allowed inside strings.  Control characters in
        this context are those with character codes in the 0-31 range,
        including ``'\\t'`` (tab), ``'\\n'``, ``'\\r'`` and ``'\\0'``.

        """
        self.encoding = encoding
        self.object_hook = object_hook
        self.object_pairs_hook = object_pairs_hook
        self.parse_float = parse_float or float
        self.parse_int = parse_int or int
        self.parse_constant = parse_constant or _CONSTANTS.__getitem__
        self.strict = strict
        self.parse_object = JSONObject
        self.parse_array = JSONArray
        self.parse_string = scanstring
        self.scan_once = scanner.make_scanner(self)

    def decode(self, s, _w=WHITESPACE.match):
        """Return the Python representation of ``s`` (a ``str`` or ``unicode``
        instance containing a JSON document)

        """
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
        end = _w(s, end).end()
        if end != len(s):
            raise ValueError(errmsg("Extra data", s, end, len(s)))
        return obj

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` or ``unicode``
        beginning with a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.

        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.

        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration:
            raise ValueError("No JSON object could be decoded")
        return obj, end
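The difference between `decode` and `raw_decode` above: `decode` rejects trailing data, while `raw_decode` tolerates it and reports where the document ended. A short sketch:

```python
import json

# raw_decode returns (object, end index); trailing data is ignored.
dec = json.JSONDecoder()
obj, end = dec.raw_decode('{"a": 1} trailing')
# obj == {'a': 1}; end == 8 (index just past the closing brace)
```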
#
# Secret Labs' Regular Expression Engine
#
# convert re-style regular expression to sre pattern
#
# Copyright (c) 1998-2001 by Secret Labs AB.  All rights reserved.
#
# See the sre.py file for information on usage and redistribution.
#

"""Internal support module for sre"""

# XXX: show string offset and offending character for all errors

import sys

from sre_constants import *

SPECIAL_CHARS = ".\\[{()*+?^$|"
REPEAT_CHARS = "*+?{"

DIGITS = set("0123456789")

OCTDIGITS = set("01234567")
HEXDIGITS = set("0123456789abcdefABCDEF")
ASCIILETTERS = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

WHITESPACE = set(" \t\n\r\v\f")

ESCAPES = {
    r"\a": (LITERAL, ord("\a")),
    r"\b": (LITERAL, ord("\b")),
    r"\f": (LITERAL, ord("\f")),
    r"\n": (LITERAL, ord("\n")),
    r"\r": (LITERAL, ord("\r")),
    r"\t": (LITERAL, ord("\t")),
    r"\v": (LITERAL, ord("\v")),
    r"\\": (LITERAL, ord("\\"))
}

CATEGORIES = {
    r"\A": (AT, AT_BEGINNING_STRING), # start of string
    r"\b": (AT, AT_BOUNDARY),
    r"\B": (AT, AT_NON_BOUNDARY),
    r"\d": (IN, [(CATEGORY, CATEGORY_DIGIT)]),
    r"\D": (IN, [(CATEGORY, CATEGORY_NOT_DIGIT)]),
    r"\s": (IN, [(CATEGORY, CATEGORY_SPACE)]),
    r"\S": (IN, [(CATEGORY, CATEGORY_NOT_SPACE)]),
    r"\w": (IN, [(CATEGORY, CATEGORY_WORD)]),
    r"\W": (IN, [(CATEGORY, CATEGORY_NOT_WORD)]),
    r"\Z": (AT, AT_END_STRING), # end of string
}

FLAGS = {
    # standard flags
    "i": SRE_FLAG_IGNORECASE,
    "L": SRE_FLAG_LOCALE,
    "m": SRE_FLAG_MULTILINE,
    "s": SRE_FLAG_DOTALL,
    "x": SRE_FLAG_VERBOSE,
    # extensions
    "t": SRE_FLAG_TEMPLATE,
    "u": SRE_FLAG_UNICODE,
}

class Pattern:
    # master pattern object.  keeps track of global attributes
    def __init__(self):
        self.flags = 0
        self.open = []
        self.groups = 1
        self.groupdict = {}
        self.lookbehind = 0

    def opengroup(self, name=None):
        gid = self.groups
        self.groups = gid + 1
        if name is not None:
            ogid = self.groupdict.get(name, None)
            if ogid is not None:
                raise error, ("redefinition of group name %s as group %d; "
                              "was group %d" % (repr(name), gid,  ogid))
            self.groupdict[name] = gid
        self.open.append(gid)
        return gid
    def closegroup(self, gid):
        self.open.remove(gid)
    def checkgroup(self, gid):
        return gid < self.groups and gid not in self.open

class SubPattern:
    # a subpattern, in intermediate form
    def __init__(self, pattern, data=None):
        self.pattern = pattern
        if data is None:
            data = []
        self.data = data
        self.width = None
    def dump(self, level=0):
        seqtypes = (tuple, list)
        for op, av in self.data:
            print level*"  " + op,
            if op == IN:
                # member sublanguage
                print
                for op, a in av:
                    print (level+1)*"  " + op, a
            elif op == BRANCH:
                print
                for i, a in enumerate(av[1]):
                    if i:
                        print level*"  " + "or"
                    a.dump(level+1)
            elif op == GROUPREF_EXISTS:
                condgroup, item_yes, item_no = av
                print condgroup
                item_yes.dump(level+1)
                if item_no:
                    print level*"  " + "else"
                    item_no.dump(level+1)
            elif isinstance(av, seqtypes):
                nl = 0
                for a in av:
                    if isinstance(a, SubPattern):
                        if not nl:
                            print
                        a.dump(level+1)
                        nl = 1
                    else:
                        print a,
                        nl = 0
                if not nl:
                    print
            else:
                print av
    def __repr__(self):
        return repr(self.data)
    def __len__(self):
        return len(self.data)
    def __delitem__(self, index):
        del self.data[index]
    def __getitem__(self, index):
        if isinstance(index, slice):
            return SubPattern(self.pattern, self.data[index])
        return self.data[index]
    def __setitem__(self, index, code):
        self.data[index] = code
    def insert(self, index, code):
        self.data.insert(index, code)
    def append(self, code):
        self.data.append(code)
    def getwidth(self):
        # determine the width (min, max) for this subpattern
        if self.width:
            return self.width
        lo = hi = 0
        UNITCODES = (ANY, RANGE, IN, LITERAL, NOT_LITERAL, CATEGORY)
        REPEATCODES = (MIN_REPEAT, MAX_REPEAT)
        for op, av in self.data:
            if op is BRANCH:
                i = MAXREPEAT - 1
                j = 0
                for av in av[1]:
                    l, h = av.getwidth()
                    i = min(i, l)
                    j = max(j, h)
                lo = lo + i
                hi = hi + j
            elif op is CALL:
                i, j = av.getwidth()
                lo = lo + i
                hi = hi + j
            elif op is SUBPATTERN:
                i, j = av[1].getwidth()
                lo = lo + i
                hi = hi + j
            elif op in REPEATCODES:
                i, j = av[2].getwidth()
                lo = lo + i * av[0]
                hi = hi + j * av[1]
            elif op in UNITCODES:
                lo = lo + 1
                hi = hi + 1
            elif op == SUCCESS:
                break
        self.width = min(lo, MAXREPEAT - 1), min(hi, MAXREPEAT)
        return self.width
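`getwidth` computes the minimum and maximum number of characters a subpattern can match, which the compiler uses for optimizations like fast-fail on short strings. A hedged check via `sre_parse` (still importable on CPython 3.x, though deprecated since 3.11):

```python
import sre_parse  # deprecated alias for the re parser internals

# 'ab?' must match at least one character and at most two.
width = sre_parse.parse('ab?').getwidth()
# width == (1, 2)
```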

class Tokenizer:
    def __init__(self, string):
        self.string = string
        self.index = 0
        self.__next()
    def __next(self):
        if self.index >= len(self.string):
            self.next = None
            return
        char = self.string[self.index]
        if char[0] == "\\":
            try:
                c = self.string[self.index + 1]
            except IndexError:
                raise error, "bogus escape (end of line)"
            char = char + c
        self.index = self.index + len(char)
        self.next = char
    def match(self, char, skip=1):
        if char == self.next:
            if skip:
                self.__next()
            return 1
        return 0
    def get(self):
        this = self.next
        self.__next()
        return this
    def tell(self):
        return self.index, self.next
    def seek(self, index):
        self.index, self.next = index

def isident(char):
    return "a" <= char <= "z" or "A" <= char <= "Z" or char == "_"

def isdigit(char):
    return "0" <= char <= "9"

def isname(name):
    # check that group name is a valid string
    if not isident(name[0]):
        return False
    for char in name[1:]:
        if not isident(char) and not isdigit(char):
            return False
    return True
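`isname` enforces the identifier-like rule for `(?P<name>...)` group names: a letter or underscore first, then letters, digits, or underscores. Self-contained copies of the three helpers:

```python
# Self-contained copies of the helpers above, validating candidate
# group names for (?P<name>...) patterns.
def isident(char):
    return "a" <= char <= "z" or "A" <= char <= "Z" or char == "_"

def isdigit(char):
    return "0" <= char <= "9"

def isname(name):
    if not isident(name[0]):
        return False
    for char in name[1:]:
        if not isident(char) and not isdigit(char):
            return False
    return True

# isname('group_1') is True; isname('1group') is False
```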

def _class_escape(source, escape, nested):
    # handle escape code inside character class
    code = ESCAPES.get(escape)
    if code:
        return code
    code = CATEGORIES.get(escape)
    if code and code[0] == IN:
        return code
    try:
        c = escape[1:2]
        if c == "x":
            # hexadecimal escape (exactly two digits)
            while source.next in HEXDIGITS and len(escape) < 4:
                escape = escape + source.get()
            escape = escape[2:]
            if len(escape) != 2:
                raise error, "bogus escape: %s" % repr("\\" + escape)
            return LITERAL, int(escape, 16) & 0xff
        elif c in OCTDIGITS:
            # octal escape (up to three digits)
            while source.next in OCTDIGITS and len(escape) < 4:
                escape = escape + source.get()
            escape = escape[1:]
            return LITERAL, int(escape, 8) & 0xff
        elif c in DIGITS:
            raise error, "bogus escape: %s" % repr(escape)
        if len(escape) == 2:
            if sys.py3kwarning and c in ASCIILETTERS:
                import warnings
                if c in 'Uu':
                    warnings.warn('bad escape %s; Unicode escapes are '
                                  'supported only since Python 3.3' % escape,
                                  FutureWarning, stacklevel=nested + 6)
                else:
                    warnings.warnpy3k('bad escape %s' % escape,
                                      DeprecationWarning, stacklevel=nested + 6)
            return LITERAL, ord(escape[1])
    except ValueError:
        pass
    raise error, "bogus escape: %s" % repr(escape)

def _escape(source, escape, state, nested):
    # handle escape code in expression
    code = CATEGORIES.get(escape)
    if code:
        return code
    code = ESCAPES.get(escape)
    if code:
        return code
    try:
        c = escape[1:2]
        if c == "x":
            # hexadecimal escape
            while source.next in HEXDIGITS and len(escape) < 4:
                escape = escape + source.get()
            if len(escape) != 4:
                raise ValueError
            return LITERAL, int(escape[2:], 16) & 0xff
        elif c == "0":
            # octal escape
            while source.next in OCTDIGITS and len(escape) < 4:
                escape = escape + source.get()
            return LITERAL, int(escape[1:], 8) & 0xff
        elif c in DIGITS:
            # octal escape *or* decimal group reference (sigh)
            if source.next in DIGITS:
                escape = escape + source.get()
                if (escape[1] in OCTDIGITS and escape[2] in OCTDIGITS and
                    source.next in OCTDIGITS):
                    # got three octal digits; this is an octal escape
                    escape = escape + source.get()
                    return LITERAL, int(escape[1:], 8) & 0xff
            # not an octal escape, so this is a group reference
            group = int(escape[1:])
            if group < state.groups:
                if not state.checkgroup(group):
                    raise error, "cannot refer to open group"
                if state.lookbehind:
                    import warnings
                    warnings.warn('group references in lookbehind '
                                  'assertions are not supported',
                                  RuntimeWarning, stacklevel=nested + 6)
                return GROUPREF, group
            raise ValueError
        if len(escape) == 2:
            if sys.py3kwarning and c in ASCIILETTERS:
                import warnings
                if c in 'Uu':
                    warnings.warn('bad escape %s; Unicode escapes are '
                                  'supported only since Python 3.3' % escape,
                                  FutureWarning, stacklevel=nested + 6)
                else:
                    warnings.warnpy3k('bad escape %s' % escape,
                                      DeprecationWarning, stacklevel=nested + 6)
            return LITERAL, ord(escape[1])
    except ValueError:
        pass
    raise error, "bogus escape: %s" % repr(escape)

def _parse_sub(source, state, nested):
    # parse an alternation: a|b|c

    items = []
    itemsappend = items.append
    sourcematch = source.match
    while 1:
        itemsappend(_parse(source, state, nested + 1))
        if sourcematch("|"):
            continue
        if not nested:
            break
        if not source.next or sourcematch(")", 0):
            break
        else:
            raise error, "pattern not properly closed"

    if len(items) == 1:
        return items[0]

    subpattern = SubPattern(state)
    subpatternappend = subpattern.append

    # check if all items share a common prefix
    while 1:
        prefix = None
        for item in items:
            if not item:
                break
            if prefix is None:
                prefix = item[0]
            elif item[0] != prefix:
                break
        else:
            # all subitems start with a common "prefix".
            # move it out of the branch
            for item in items:
                del item[0]
            subpatternappend(prefix)
            continue # check next one
        break

    # check if the branch can be replaced by a character set
    for item in items:
        if len(item) != 1 or item[0][0] != LITERAL:
            break
    else:
        # we can store this as a character set instead of a
        # branch (the compiler may optimize this even more)
        set = []
        setappend = set.append
        for item in items:
            setappend(item[0])
        subpatternappend((IN, set))
        return subpattern

    subpattern.append((BRANCH, (None, items)))
    return subpattern

def _parse_sub_cond(source, state, condgroup, nested):
    item_yes = _parse(source, state, nested + 1)
    if source.match("|"):
        item_no = _parse(source, state, nested + 1)
        if source.match("|"):
            raise error, "conditional backref with more than two branches"
    else:
        item_no = None
    if source.next and not source.match(")", 0):
        raise error, "pattern not properly closed"
    subpattern = SubPattern(state)
    subpattern.append((GROUPREF_EXISTS, (condgroup, item_yes, item_no)))
    return subpattern

_PATTERNENDERS = set("|)")
_ASSERTCHARS = set("=!<")
_LOOKBEHINDASSERTCHARS = set("=!")
_REPEATCODES = set([MIN_REPEAT, MAX_REPEAT])

def _parse(source, state, nested):
    # parse a simple pattern
    subpattern = SubPattern(state)

    # precompute constants into local variables
    subpatternappend = subpattern.append
    sourceget = source.get
    sourcematch = source.match
    _len = len
    PATTERNENDERS = _PATTERNENDERS
    ASSERTCHARS = _ASSERTCHARS
    LOOKBEHINDASSERTCHARS = _LOOKBEHINDASSERTCHARS
    REPEATCODES = _REPEATCODES

    while 1:

        if source.next in PATTERNENDERS:
            break # end of subpattern
        this = sourceget()
        if this is None:
            break # end of pattern

        if state.flags & SRE_FLAG_VERBOSE:
            # skip whitespace and comments
            if this in WHITESPACE:
                continue
            if this == "#":
                while 1:
                    this = sourceget()
                    if this in (None, "\n"):
                        break
                continue

        if this and this[0] not in SPECIAL_CHARS:
            subpatternappend((LITERAL, ord(this)))

        elif this == "[":
            # character set
            set = []
            setappend = set.append
##          if sourcematch(":"):
##              pass # handle character classes
            if sourcematch("^"):
                setappend((NEGATE, None))
            # check remaining characters
            start = set[:]
            while 1:
                this = sourceget()
                if this == "]" and set != start:
                    break
                elif this and this[0] == "\\":
                    code1 = _class_escape(source, this, nested + 1)
                elif this:
                    code1 = LITERAL, ord(this)
                else:
                    raise error, "unexpected end of regular expression"
                if sourcematch("-"):
                    # potential range
                    this = sourceget()
                    if this == "]":
                        if code1[0] is IN:
                            code1 = code1[1][0]
                        setappend(code1)
                        setappend((LITERAL, ord("-")))
                        break
                    elif this:
                        if this[0] == "\\":
                            code2 = _class_escape(source, this, nested + 1)
                        else:
                            code2 = LITERAL, ord(this)
                        if code1[0] != LITERAL or code2[0] != LITERAL:
                            raise error, "bad character range"
                        lo = code1[1]
                        hi = code2[1]
                        if hi < lo:
                            raise error, "bad character range"
                        setappend((RANGE, (lo, hi)))
                    else:
                        raise error, "unexpected end of regular expression"
                else:
                    if code1[0] is IN:
                        code1 = code1[1][0]
                    setappend(code1)

            # XXX: <fl> should move set optimization to compiler!
            if _len(set)==1 and set[0][0] is LITERAL:
                subpatternappend(set[0]) # optimization
            elif _len(set)==2 and set[0][0] is NEGATE and set[1][0] is LITERAL:
                subpatternappend((NOT_LITERAL, set[1][1])) # optimization
            else:
                # XXX: <fl> should add charmap optimization here
                subpatternappend((IN, set))

        elif this and this[0] in REPEAT_CHARS:
            # repeat previous item
            if this == "?":
                min, max = 0, 1
            elif this == "*":
                min, max = 0, MAXREPEAT

            elif this == "+":
                min, max = 1, MAXREPEAT
            elif this == "{":
                if source.next == "}":
                    subpatternappend((LITERAL, ord(this)))
                    continue
                here = source.tell()
                min, max = 0, MAXREPEAT
                lo = hi = ""
                while source.next in DIGITS:
                    lo = lo + source.get()
                if sourcematch(","):
                    while source.next in DIGITS:
                        hi = hi + sourceget()
                else:
                    hi = lo
                if not sourcematch("}"):
                    subpatternappend((LITERAL, ord(this)))
                    source.seek(here)
                    continue
                if lo:
                    min = int(lo)
                    if min >= MAXREPEAT:
                        raise OverflowError("the repetition number is too large")
                if hi:
                    max = int(hi)
                    if max >= MAXREPEAT:
                        raise OverflowError("the repetition number is too large")
                    if max < min:
                        raise error("bad repeat interval")
            else:
                raise error, "not supported"
            # figure out which item to repeat
            if subpattern:
                item = subpattern[-1:]
            else:
                item = None
            if not item or (_len(item) == 1 and item[0][0] == AT):
                raise error, "nothing to repeat"
            if item[0][0] in REPEATCODES:
                raise error, "multiple repeat"
            if sourcematch("?"):
                subpattern[-1] = (MIN_REPEAT, (min, max, item))
            else:
                subpattern[-1] = (MAX_REPEAT, (min, max, item))

        elif this == ".":
            subpatternappend((ANY, None))

        elif this == "(":
            group = 1
            name = None
            condgroup = None
            if sourcematch("?"):
                group = 0
                # options
                if sourcematch("P"):
                    # python extensions
                    if sourcematch("<"):
                        # named group: skip forward to end of name
                        name = ""
                        while 1:
                            char = sourceget()
                            if char is None:
                                raise error, "unterminated name"
                            if char == ">":
                                break
                            name = name + char
                        group = 1
                        if not name:
                            raise error("missing group name")
                        if not isname(name):
                            raise error("bad character in group name %r" %
                                        name)
                    elif sourcematch("="):
                        # named backreference
                        name = ""
                        while 1:
                            char = sourceget()
                            if char is None:
                                raise error, "unterminated name"
                            if char == ")":
                                break
                            name = name + char
                        if not name:
                            raise error("missing group name")
                        if not isname(name):
                            raise error("bad character in backref group name "
                                        "%r" % name)
                        gid = state.groupdict.get(name)
                        if gid is None:
                            msg = "unknown group name: {0!r}".format(name)
                            raise error(msg)
                        if state.lookbehind:
                            import warnings
                            warnings.warn('group references in lookbehind '
                                          'assertions are not supported',
                                          RuntimeWarning, stacklevel=nested + 6)
                        subpatternappend((GROUPREF, gid))
                        continue
                    else:
                        char = sourceget()
                        if char is None:
                            raise error, "unexpected end of pattern"
                        raise error, "unknown specifier: ?P%s" % char
                elif sourcematch(":"):
                    # non-capturing group
                    group = 2
                elif sourcematch("#"):
                    # comment
                    while 1:
                        if source.next is None or source.next == ")":
                            break
                        sourceget()
                    if not sourcematch(")"):
                        raise error, "unbalanced parenthesis"
                    continue
                elif source.next in ASSERTCHARS:
                    # lookahead assertions
                    char = sourceget()
                    dir = 1
                    if char == "<":
                        if source.next not in LOOKBEHINDASSERTCHARS:
                            raise error, "syntax error"
                        dir = -1 # lookbehind
                        char = sourceget()
                        state.lookbehind += 1
                    p = _parse_sub(source, state, nested + 1)
                    if dir < 0:
                        state.lookbehind -= 1
                    if not sourcematch(")"):
                        raise error, "unbalanced parenthesis"
                    if char == "=":
                        subpatternappend((ASSERT, (dir, p)))
                    else:
                        subpatternappend((ASSERT_NOT, (dir, p)))
                    continue
                elif sourcematch("("):
                    # conditional backreference group
                    condname = ""
                    while 1:
                        char = sourceget()
                        if char is None:
                            raise error, "unterminated name"
                        if char == ")":
                            break
                        condname = condname + char
                    group = 2
                    if not condname:
                        raise error("missing group name")
                    if isname(condname):
                        condgroup = state.groupdict.get(condname)
                        if condgroup is None:
                            msg = "unknown group name: {0!r}".format(condname)
                            raise error(msg)
                    else:
                        try:
                            condgroup = int(condname)
                        except ValueError:
                            raise error, "bad character in group name"
                    if state.lookbehind:
                        import warnings
                        warnings.warn('group references in lookbehind '
                                      'assertions are not supported',
                                      RuntimeWarning, stacklevel=nested + 6)
                else:
                    # flags
                    if not source.next in FLAGS:
                        raise error, "unexpected end of pattern"
                    while source.next in FLAGS:
                        state.flags = state.flags | FLAGS[sourceget()]
            if group:
                # parse group contents
                if group == 2:
                    # anonymous group
                    group = None
                else:
                    group = state.opengroup(name)
                if condgroup:
                    p = _parse_sub_cond(source, state, condgroup, nested + 1)
                else:
                    p = _parse_sub(source, state, nested + 1)
                if not sourcematch(")"):
                    raise error, "unbalanced parenthesis"
                if group is not None:
                    state.closegroup(group)
                subpatternappend((SUBPATTERN, (group, p)))
            else:
                while 1:
                    char = sourceget()
                    if char is None:
                        raise error, "unexpected end of pattern"
                    if char == ")":
                        break
                    raise error, "unknown extension"

        elif this == "^":
            subpatternappend((AT, AT_BEGINNING))

        elif this == "$":
            subpattern.append((AT, AT_END))

        elif this and this[0] == "\\":
            code = _escape(source, this, state, nested + 1)
            subpatternappend(code)

        else:
            raise error, "parser error"

    return subpattern
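The character-class and repeat branches above have directly observable consequences at the `re` level. A small sketch exercising those paths through the public API (rather than calling this parser directly):

```python
import re

# "-" immediately before "]" takes the "potential range" break above
# and is appended as a literal, so "[a-]" matches a plain "-".
m_dash = re.match(r"[a-]", "-")

# A "{" not followed by a valid bound is pushed back and emitted as
# LITERAL, so "a{x" matches the literal text "a{x".
m_brace = re.match(r"a{x", "a{x")

# A bounded repeat with max < min is rejected ("bad repeat interval").
try:
    re.compile(r"a{3,2}")
    bad_interval = None
except re.error as exc:
    bad_interval = exc
```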

def parse(str, flags=0, pattern=None):
    # parse 're' pattern into list of (opcode, argument) tuples

    source = Tokenizer(str)

    if pattern is None:
        pattern = Pattern()
    pattern.flags = flags
    pattern.str = str

    p = _parse_sub(source, pattern, 0)
    if (sys.py3kwarning and
        (p.pattern.flags & SRE_FLAG_LOCALE) and
        (p.pattern.flags & SRE_FLAG_UNICODE)):
        import warnings
        warnings.warnpy3k("LOCALE and UNICODE flags are incompatible",
                          DeprecationWarning, stacklevel=5)

    tail = source.get()
    if tail == ")":
        raise error, "unbalanced parenthesis"
    elif tail:
        raise error, "bogus characters at end of regular expression"

    if not (flags & SRE_FLAG_VERBOSE) and p.pattern.flags & SRE_FLAG_VERBOSE:
        # the VERBOSE flag was switched on inside the pattern.  to be
        # on the safe side, we'll parse the whole thing again...
        return parse(str, p.pattern.flags)

    if flags & SRE_FLAG_DEBUG:
        p.dump()

    return p
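The group extensions parsed above and the inline-flag re-parse at the end of parse() can likewise be exercised through `re`; a hedged sketch using the stdlib module:

```python
import re

# (?P<name>...) opens a named group; the name is validated by isname().
word = re.match(r"(?P<word>\w+)", "hello").group("word")

# (?(id)yes) is the conditional backreference group: ">" is required
# only when group 1 actually participated in the match.
cond_yes = re.match(r"(<)?x(?(1)>)", "<x>")
cond_no = re.match(r"(<)?x(?(1)>)", "x")

# An inline (?x) switches VERBOSE on inside the pattern; parse()
# detects the flag change and re-parses the whole pattern with it.
verbose = re.match(r"(?x) a b  # whitespace and comments ignored", "ab")
```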

def parse_template(source, pattern):
    # parse 're' replacement string into list of literals and
    # group references
    s = Tokenizer(source)
    sget = s.get
    p = []
    a = p.append
    def literal(literal, p=p, pappend=a):
        if p and p[-1][0] is LITERAL:
            p[-1] = LITERAL, p[-1][1] + literal
        else:
            pappend((LITERAL, literal))
    sep = source[:0]
    if type(sep) is type(""):
        makechar = chr
    else:
        makechar = unichr
    while 1:
        this = sget()
        if this is None:
            break # end of replacement string
        if this and this[0] == "\\":
            # group
            c = this[1:2]
            if c == "g":
                name = ""
                if s.match("<"):
                    while 1:
                        char = sget()
                        if char is None:
                            raise error, "unterminated group name"
                        if char == ">":
                            break
                        name = name + char
                if not name:
                    raise error, "missing group name"
                try:
                    index = int(name)
                    if index < 0:
                        raise error, "negative group number"
                except ValueError:
                    if not isname(name):
                        raise error, "bad character in group name"
                    try:
                        index = pattern.groupindex[name]
                    except KeyError:
                        msg = "unknown group name: {0!r}".format(name)
                        raise IndexError(msg)
                a((MARK, index))
            elif c == "0":
                if s.next in OCTDIGITS:
                    this = this + sget()
                    if s.next in OCTDIGITS:
                        this = this + sget()
                literal(makechar(int(this[1:], 8) & 0xff))
            elif c in DIGITS:
                isoctal = False
                if s.next in DIGITS:
                    this = this + sget()
                    if (c in OCTDIGITS and this[2] in OCTDIGITS and
                        s.next in OCTDIGITS):
                        this = this + sget()
                        isoctal = True
                        literal(makechar(int(this[1:], 8) & 0xff))
                if not isoctal:
                    a((MARK, int(this[1:])))
            else:
                try:
                    this = makechar(ESCAPES[this][1])
                except KeyError:
                    if sys.py3kwarning and c in ASCIILETTERS:
                        import warnings
                        warnings.warnpy3k('bad escape %s' % this,
                                          DeprecationWarning, stacklevel=4)
                literal(this)
        else:
            literal(this)
    # convert template to groups and literals lists
    i = 0
    groups = []
    groupsappend = groups.append
    literals = [None] * len(p)
    for c, s in p:
        if c is MARK:
            groupsappend((i, s))
            # literals[i] is already None
        else:
            literals[i] = s
        i = i + 1
    return groups, literals
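parse_template() is what re.sub and friends use to split a replacement string into merged literals and group references. A small demonstration via the public API:

```python
import re

# \g<name> becomes a group reference (MARK); adjacent literals merge.
out = re.sub(r"(?P<word>\w+)", r"<\g<word>>", "hi there")

# "\0" plus up to two octal digits is an octal character escape in
# templates: \012 is chr(0o12), i.e. a newline.
octal = re.sub("X", r"\012", "aXb")
```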

def expand_template(template, match):
    g = match.group
    sep = match.string[:0]
    groups, literals = template
    literals = literals[:]
    try:
        for index, group in groups:
            literals[index] = s = g(group)
            if s is None:
                raise error, "unmatched group"
    except IndexError:
        raise error, "invalid group reference"
    return sep.join(literals)
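expand_template() merges the (groups, literals) pair produced by parse_template() with an actual match; Match.expand is the public entry point for the same machinery. A minimal sketch:

```python
import re

m = re.match(r"(?P<first>\w+) (?P<second>\w+)", "hello world")
# Group references in the template are filled from the match and
# joined with the surrounding literals.
swapped = m.expand(r"\g<second>, \g<first>")
```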
# /usr/lib64/python2.7/shelve.py (recovered from compiled bytecode:
# module structure and docstrings; method bodies omitted)
"""Manage shelves of pickled objects.

A "shelf" is a persistent, dictionary-like object.  The difference
with dbm databases is that the values (not the keys!) in a shelf can
be essentially arbitrary Python objects -- anything that the "pickle"
module can handle.  This includes most class instances, recursive data
types, and objects containing lots of shared sub-objects.  The keys
are ordinary strings.

To summarize the interface (key is a string, data is an arbitrary
object):

        import shelve
        d = shelve.open(filename) # open, with (g)dbm filename -- no suffix

        d[key] = data   # store data at key (overwrites old data if
                        # using an existing key)
        data = d[key]   # retrieve a COPY of the data at key (raise
                        # KeyError if no such key) -- NOTE that this
                        # access returns a *copy* of the entry!
        del d[key]      # delete data stored at key (raises KeyError
                        # if no such key)
        flag = d.has_key(key)   # true if the key exists; same as "key in d"
        list = d.keys() # a list of all existing keys (slow!)

        d.close()       # close it

Dependent on the implementation, closing a persistent dictionary may
or may not be necessary to flush changes to disk.

Normally, d[key] returns a COPY of the entry.  This needs care when
mutable entries are mutated: for example, if d[key] is a list,
        d[key].append(anitem)
does NOT modify the entry d[key] itself, as stored in the persistent
mapping -- it only modifies the copy, which is then immediately
discarded, so that the append has NO effect whatsoever.  To append an
item to d[key] in a way that will affect the persistent mapping, use:
        data = d[key]
        data.append(anitem)
        d[key] = data

To avoid the problem with mutable entries, you may pass the keyword
argument writeback=True in the call to shelve.open.  When you use:
        d = shelve.open(filename, writeback=True)
then d keeps a cache of all entries you access, and writes them all back
to the persistent mapping when you call d.close().  This ensures that
such usage as d[key].append(anitem) works as intended.

However, using keyword argument writeback=True may consume vast amount
of memory for the cache, and it may make d.close() very slow, if you
access many of d's entries after opening it in this way: d has no way to
check which of the entries you access are mutable and/or which ones you
actually mutate, so it must cache, and write back at close, all of the
entries that you access.  You can call d.sync() to write back all the
entries in the cache, and empty the cache (d.sync() also synchronizes
the persistent dictionary on disk, if feasible).
"""

try:
    from cPickle import Pickler, Unpickler
except ImportError:
    from pickle import Pickler, Unpickler

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

import UserDict

__all__ = ["Shelf", "BsdDbShelf", "DbfilenameShelf", "open"]

class _ClosedDict(UserDict.DictMixin):
    """Marker for a closed dict.  Access attempts raise a ValueError."""

class Shelf(UserDict.DictMixin):
    """Base class for shelf implementations.

    This is initialized with a dictionary-like object.
    See the module's __doc__ string for an overview of the interface.
    """
    # Recovered method names: __init__, keys, __len__, has_key,
    # __contains__, get, __getitem__, __setitem__, __delitem__,
    # close, __del__, sync

class BsdDbShelf(Shelf):
    """Shelf implementation using the "BSD" db interface.

    This adds methods first(), next(), previous(), last() and
    set_location() that have no counterpart in [g]dbm databases.

    The actual database must be opened using one of the "bsddb"
    modules "open" routines (i.e. bsddb.hashopen, bsddb.btopen or
    bsddb.rnopen) and passed to the constructor.

    See the module's __doc__ string for an overview of the interface.
    """

class DbfilenameShelf(Shelf):
    """Shelf implementation using the "anydbm" generic dbm interface.

    This is initialized with the filename for the dbm database.
    See the module's __doc__ string for an overview of the interface.
    """

def open(filename, flag='c', protocol=None, writeback=False):
    """Open a persistent dictionary for reading and writing.

    The filename parameter is the base filename for the underlying
    database.  As a side-effect, an extension may be added to the
    filename and more than one file may be created.  The optional flag
    parameter has the same interpretation as the flag parameter of
    anydbm.open(). The optional protocol parameter specifies the
    version of the pickle protocol (0, 1, or 2).

    See the module's __doc__ string for an overview of the interface.
    """
    return DbfilenameShelf(filename, flag, protocol, writeback)
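The shelf interface described above can be shown with a minimal round trip (written against a modern Python, where the shelve API is essentially unchanged); note the copy semantics the docstring warns about:

```python
import os
import shelve
import tempfile

# Store an arbitrary picklable value under a string key, then read it back.
path = os.path.join(tempfile.mkdtemp(), "demo_shelf")
d = shelve.open(path)
d["nums"] = [1, 2, 3]
d.close()

d = shelve.open(path)
nums = d["nums"]        # a COPY of the stored entry
d["nums"].append(4)     # mutates the copy only -- the shelf is unchanged
unchanged = d["nums"]
d.close()
```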
# /usr/lib64/python2.7/popen2.py (recovered from compiled bytecode:
# module structure and docstrings; most method bodies omitted)
"""Spawn a command with pipes to its stdin, stdout, and optionally stderr.

The normal os.popen(cmd, mode) call spawns a shell command and provides a
file interface to just the input or output of the process depending on
whether mode is 'r' or 'w'.  This module provides the functions popen2(cmd)
and popen3(cmd) which return two or three pipes to the spawned command.
"""

import os
import sys
import warnings

warnings.warn("The popen2 module is deprecated.  Use the subprocess module.",
              DeprecationWarning, stacklevel=2)

__all__ = ["popen2", "popen3", "popen4"]

try:
    MAXFD = os.sysconf("SC_OPEN_MAX")
except (AttributeError, ValueError):
    MAXFD = 256

_active = []

def _cleanup():
    for inst in _active[:]:
        if inst.poll(_deadstate=sys.maxint) >= 0:
            try:
                _active.remove(inst)
            except ValueError:
                pass

class Popen3:
    """Class representing a child process.  Normally, instances are created
    internally by the functions popen2() and popen3()."""

    sts = -1                    # Child not completed yet

    def __init__(self, cmd, capturestderr=False, bufsize=-1):
        """The parameter 'cmd' is the shell command to execute in a
        sub-process.  On UNIX, 'cmd' may be a sequence, in which case arguments
        will be passed directly to the program without shell intervention (as
        with os.spawnv()).  If 'cmd' is a string it will be passed to the shell
        (as with os.system()).   The 'capturestderr' flag, if true, specifies
        that the object should capture standard error output of the child
        process.  The default is false.  If the 'bufsize' parameter is
        specified, it specifies the size of the I/O buffers to/from the child
        process."""

    def poll(self, _deadstate=None):
        """Return the exit status of the child process if it has finished,
        or -1 if it hasn't finished yet."""

    def wait(self):
        """Wait for and return the exit status of the child process."""

    # Recovered method names: __del__, _run_child

class Popen4(Popen3):
    # Like Popen3, but the child's stderr is redirected into its stdout.
    childerr = None

if sys.platform[:3] == "win" or sys.platform == "os2emx":
    # On these platforms popen2/popen3/popen4 delegate to
    # os.popen2(), os.popen3() and os.popen4() respectively.
    pass
else:
    def popen2(cmd, bufsize=-1, mode='t'):
        """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd' may
        be a sequence, in which case arguments will be passed directly to the
        program without shell intervention (as with os.spawnv()). If 'cmd' is a
        string it will be passed to the shell (as with os.system()). If
        'bufsize' is specified, it sets the buffer size for the I/O pipes. The
        file objects (child_stdout, child_stdin) are returned."""
        inst = Popen3(cmd, False, bufsize)
        return inst.fromchild, inst.tochild

    def popen3(cmd, bufsize=-1, mode='t'):
        """Same as popen2(), but the 'capturestderr' flag is set, so the
        file objects (child_stdout, child_stdin, child_stderr) are returned."""
        inst = Popen3(cmd, True, bufsize)
        return inst.fromchild, inst.tochild, inst.childerr

    def popen4(cmd, bufsize=-1, mode='t'):
        """Same as popen2(), but stderr is combined with stdout, so the
        file objects (child_stdout_stderr, child_stdin) are returned."""
        inst = Popen4(cmd, bufsize)
        return inst.fromchild, inst.tochild

    __all__.extend(["Popen3", "Popen4"])
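The deprecation warning baked into the module points at subprocess; the popen3(cmd) triple of pipes maps directly onto a subprocess.Popen. A sketch of the replacement:

```python
import subprocess
import sys

# popen2.popen3(cmd) equivalent: pipes for child stdin, stdout and stderr.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hi')"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()
```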





r"""File-like objects that read from or write to a string buffer.

This implements (nearly) all stdio methods.

f = StringIO()      # ready for writing
f = StringIO(buf)   # ready for reading
f.close()           # explicitly release resources held
flag = f.isatty()   # always false
pos = f.tell()      # get current position
f.seek(pos)         # set current position
f.seek(pos, mode)   # mode 0: absolute; 1: relative; 2: relative to EOF
buf = f.read()      # read until EOF
buf = f.read(n)     # read up to n bytes
buf = f.readline()  # read until end of line ('\n') or EOF
list = f.readlines()# list of f.readline() results until EOF
f.truncate([size])  # truncate file to at most size (default: current pos)
f.write(buf)        # write at current position
f.writelines(list)  # for line in list: f.write(line)
f.getvalue()        # return whole file's contents as a string

Notes:
- Using a real file is often faster (but less convenient).
- There's also a much faster implementation in C, called cStringIO, but
  it's not subclassable.
- fileno() is left unimplemented so that code which uses it triggers
  an exception early.
- Seeking far beyond EOF and then writing will insert real null
  bytes that occupy space in the buffer.
- There's a simple test set (see end of this file).
"""
try:
    from errno import EINVAL
except ImportError:
    EINVAL = 22

__all__ = ["StringIO"]

def _complain_ifclosed(closed):
    if closed:
        raise ValueError, "I/O operation on closed file"

class StringIO:
    """class StringIO([buffer])

    When a StringIO object is created, it can be initialized to an existing
    string by passing the string to the constructor. If no string is given,
    the StringIO will start empty.

    The StringIO object can accept either Unicode or 8-bit strings, but
    mixing the two may take some care. If both are used, 8-bit strings that
    cannot be interpreted as 7-bit ASCII (that use the 8th bit) will cause
    a UnicodeError to be raised when getvalue() is called.
    """
    def __init__(self, buf = ''):
        # Force self.buf to be a string or unicode
        if not isinstance(buf, basestring):
            buf = str(buf)
        self.buf = buf
        self.len = len(buf)
        self.buflist = []
        self.pos = 0
        self.closed = False
        self.softspace = 0

    def __iter__(self):
        return self

    def next(self):
        """A file object is its own iterator, for example iter(f) returns f
        (unless f is closed). When a file is used as an iterator, typically
        in a for loop (for example, for line in f: print line), the next()
        method is called repeatedly. This method returns the next input line,
        or raises StopIteration when EOF is hit.
        """
        _complain_ifclosed(self.closed)
        r = self.readline()
        if not r:
            raise StopIteration
        return r

    def close(self):
        """Free the memory buffer.
        """
        if not self.closed:
            self.closed = True
            del self.buf, self.pos

    def isatty(self):
        """Returns False because StringIO objects are not connected to a
        tty-like device.
        """
        _complain_ifclosed(self.closed)
        return False

    def seek(self, pos, mode = 0):
        """Set the file's current position.

        The mode argument is optional and defaults to 0 (absolute file
        positioning); other values are 1 (seek relative to the current
        position) and 2 (seek relative to the file's end).

        There is no return value.
        """
        _complain_ifclosed(self.closed)
        if self.buflist:
            self.buf += ''.join(self.buflist)
            self.buflist = []
        if mode == 1:
            pos += self.pos
        elif mode == 2:
            pos += self.len
        self.pos = max(0, pos)

    def tell(self):
        """Return the file's current position."""
        _complain_ifclosed(self.closed)
        return self.pos

    def read(self, n = -1):
        """Read at most size bytes from the file
        (less if the read hits EOF before obtaining size bytes).

        If the size argument is negative or omitted, read all data until EOF
        is reached. The bytes are returned as a string object. An empty
        string is returned when EOF is encountered immediately.
        """
        _complain_ifclosed(self.closed)
        if self.buflist:
            self.buf += ''.join(self.buflist)
            self.buflist = []
        if n is None or n < 0:
            newpos = self.len
        else:
            newpos = min(self.pos+n, self.len)
        r = self.buf[self.pos:newpos]
        self.pos = newpos
        return r

    def readline(self, length=None):
        r"""Read one entire line from the file.

        A trailing newline character is kept in the string (but may be absent
        when a file ends with an incomplete line). If the size argument is
        present and non-negative, it is a maximum byte count (including the
        trailing newline) and an incomplete line may be returned.

        An empty string is returned only when EOF is encountered immediately.

        Note: Unlike stdio's fgets(), the returned string contains null
        characters ('\0') if they occurred in the input.
        """
        _complain_ifclosed(self.closed)
        if self.buflist:
            self.buf += ''.join(self.buflist)
            self.buflist = []
        i = self.buf.find('\n', self.pos)
        if i < 0:
            newpos = self.len
        else:
            newpos = i+1
        if length is not None and length >= 0:
            if self.pos + length < newpos:
                newpos = self.pos + length
        r = self.buf[self.pos:newpos]
        self.pos = newpos
        return r
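As the docstring notes, readline() keeps the trailing newline and honors an optional byte limit; io.BytesIO shows the same behavior:

```python
# readline() keeps the trailing '\n'; a byte limit can cut a line short.
import io

f = io.BytesIO(b"one\ntwo\n")
first = f.readline()     # b'one\n' -- newline kept
partial = f.readline(2)  # b'tw' -- limit hit before the newline
print(first, partial)
```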

    def readlines(self, sizehint = 0):
        """Read until EOF using readline() and return a list containing the
        lines thus read.

        If the optional sizehint argument is present, instead of reading up
        to EOF, whole lines totalling approximately sizehint bytes are read
        (possibly more, to accommodate a final whole line).
        """
        total = 0
        lines = []
        line = self.readline()
        while line:
            lines.append(line)
            total += len(line)
            if 0 < sizehint <= total:
                break
            line = self.readline()
        return lines

    def truncate(self, size=None):
        """Truncate the file's size.

        If the optional size argument is present, the file is truncated to
        (at most) that size. The size defaults to the current position.
        The current file position is not changed unless the position
        is beyond the new file size.

        If the specified size exceeds the file's current size, the
        file remains unchanged.
        """
        _complain_ifclosed(self.closed)
        if size is None:
            size = self.pos
        elif size < 0:
            raise IOError(EINVAL, "Negative size not allowed")
        elif size < self.pos:
            self.pos = size
        self.buf = self.getvalue()[:size]
        self.len = size
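The default-to-current-position behavior described above can be sketched with io.BytesIO (one difference: this class also clamps the position when it lies beyond the new size, which io.BytesIO does not):

```python
# truncate() with no argument cuts the buffer at the current position.
import io

f = io.BytesIO(b"0123456789")
f.seek(4)
f.truncate()            # size defaults to the current position
print(f.getvalue())     # b'0123'
print(f.tell())         # 4 -- position unchanged
```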

    def write(self, s):
        """Write a string to the file.

        There is no return value.
        """
        _complain_ifclosed(self.closed)
        if not s: return
        # Force s to be a string or unicode
        if not isinstance(s, basestring):
            s = str(s)
        spos = self.pos
        slen = self.len
        if spos == slen:
            self.buflist.append(s)
            self.len = self.pos = spos + len(s)
            return
        if spos > slen:
            self.buflist.append('\0'*(spos - slen))
            slen = spos
        newpos = spos + len(s)
        if spos < slen:
            if self.buflist:
                self.buf += ''.join(self.buflist)
            self.buflist = [self.buf[:spos], s, self.buf[newpos:]]
            self.buf = ''
            if newpos > slen:
                slen = newpos
        else:
            self.buflist.append(s)
            slen = newpos
        self.len = slen
        self.pos = newpos
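The two branches of write() above -- overwriting in the middle of the buffer, and zero-padding when writing past EOF -- can be observed with io.BytesIO, which implements the same semantics:

```python
# Overwrite-in-place and zero-padding behavior of write().
import io

f = io.BytesIO()
f.write(b"hello world")
f.seek(0)
f.write(b"HELLO")           # overwrites in place; the tail is kept
print(f.getvalue())         # b'HELLO world'

f.seek(20)                  # position beyond EOF...
f.write(b"!")               # ...so the gap is padded with b'\x00'
print(len(f.getvalue()))    # 21
```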

    def writelines(self, iterable):
        """Write a sequence of strings to the file. The sequence can be any
        iterable object producing strings, typically a list of strings. There
        is no return value.

        (The name is intended to match readlines(); writelines() does not add
        line separators.)
        """
        write = self.write
        for line in iterable:
            write(line)

    def flush(self):
        """Flush the internal buffer
        """
        _complain_ifclosed(self.closed)

    def getvalue(self):
        """
        Retrieve the entire contents of the "file" at any time before
        the StringIO object's close() method is called.

        The StringIO object can accept either Unicode or 8-bit strings,
        but mixing the two may take some care. If both are used, 8-bit
        strings that cannot be interpreted as 7-bit ASCII (that use the
        8th bit) will cause a UnicodeError to be raised when getvalue()
        is called.
        """
        _complain_ifclosed(self.closed)
        if self.buflist:
            self.buf += ''.join(self.buflist)
            self.buflist = []
        return self.buf


# A little test suite

def test():
    import sys
    if sys.argv[1:]:
        file = sys.argv[1]
    else:
        file = '/etc/passwd'
    lines = open(file, 'r').readlines()
    text = open(file, 'r').read()
    f = StringIO()
    for line in lines[:-2]:
        f.write(line)
    f.writelines(lines[-2:])
    if f.getvalue() != text:
        raise RuntimeError, 'write failed'
    length = f.tell()
    print 'File length =', length
    f.seek(len(lines[0]))
    f.write(lines[1])
    f.seek(0)
    print 'First line =', repr(f.readline())
    print 'Position =', f.tell()
    line = f.readline()
    print 'Second line =', repr(line)
    f.seek(-len(line), 1)
    line2 = f.read(len(line))
    if line != line2:
        raise RuntimeError, 'bad result after seek back'
    f.seek(len(line2), 1)
    list = f.readlines()
    line = list[-1]
    f.seek(f.tell() - len(line))
    line2 = f.read()
    if line != line2:
        raise RuntimeError, 'bad result after seek back from EOF'
    print 'Read', len(list), 'more lines'
    print 'File length =', f.tell()
    if f.tell() != length:
        raise RuntimeError, 'bad length'
    f.truncate(length/2)
    f.seek(0, 2)
    print 'Truncated length =', f.tell()
    if f.tell() != length/2:
        raise RuntimeError, 'truncate did not adjust length'
    f.close()

if __name__ == '__main__':
    test()
""" robotparser.py

    Copyright (C) 2000  Bastian Kleineidam

    You can choose between two licenses when using this package:
    1) GNU GPLv2
    2) PSF license for Python 2.2

    The robots.txt Exclusion Protocol is implemented as specified in
    http://www.robotstxt.org/norobots-rfc.txt

"""
import urlparse
import urllib

__all__ = ["RobotFileParser"]


class RobotFileParser:
    """ This class provides a set of methods to read, parse and answer
    questions about a single robots.txt file.

    """

    def __init__(self, url=''):
        self.entries = []
        self.default_entry = None
        self.disallow_all = False
        self.allow_all = False
        self.set_url(url)
        self.last_checked = 0

    def mtime(self):
        """Returns the time the robots.txt file was last fetched.

        This is useful for long-running web spiders that need to
        check for new robots.txt files periodically.

        """
        return self.last_checked

    def modified(self):
        """Sets the time the robots.txt file was last fetched to the
        current time.

        """
        import time
        self.last_checked = time.time()

    def set_url(self, url):
        """Sets the URL referring to a robots.txt file."""
        self.url = url
        self.host, self.path = urlparse.urlparse(url)[1:3]

    def read(self):
        """Reads the robots.txt URL and feeds it to the parser."""
        opener = URLopener()
        f = opener.open(self.url)
        lines = [line.strip() for line in f]
        f.close()
        self.errcode = opener.errcode
        if self.errcode in (401, 403):
            self.disallow_all = True
        elif self.errcode >= 400 and self.errcode < 500:
            self.allow_all = True
        elif self.errcode == 200 and lines:
            self.parse(lines)

    def _add_entry(self, entry):
        if "*" in entry.useragents:
            # the default entry is considered last
            if self.default_entry is None:
                # the first default entry wins
                self.default_entry = entry
        else:
            self.entries.append(entry)

    def parse(self, lines):
        """parse the input lines from a robots.txt file.
           We allow that a user-agent: line is not preceded by
           one or more blank lines."""
        # states:
        #   0: start state
        #   1: saw user-agent line
        #   2: saw an allow or disallow line
        state = 0
        linenumber = 0
        entry = Entry()

        self.modified()
        for line in lines:
            linenumber += 1
            if not line:
                if state == 1:
                    entry = Entry()
                    state = 0
                elif state == 2:
                    self._add_entry(entry)
                    entry = Entry()
                    state = 0
            # remove optional comment and strip line
            i = line.find('#')
            if i >= 0:
                line = line[:i]
            line = line.strip()
            if not line:
                continue
            line = line.split(':', 1)
            if len(line) == 2:
                line[0] = line[0].strip().lower()
                line[1] = urllib.unquote(line[1].strip())
                if line[0] == "user-agent":
                    if state == 2:
                        self._add_entry(entry)
                        entry = Entry()
                    entry.useragents.append(line[1])
                    state = 1
                elif line[0] == "disallow":
                    if state != 0:
                        entry.rulelines.append(RuleLine(line[1], False))
                        state = 2
                elif line[0] == "allow":
                    if state != 0:
                        entry.rulelines.append(RuleLine(line[1], True))
                        state = 2
        if state == 2:
            self._add_entry(entry)


    def can_fetch(self, useragent, url):
        """using the parsed robots.txt decide if useragent can fetch url"""
        if self.disallow_all:
            return False
        if self.allow_all:
            return True

        # Until the robots.txt file has been read or found not
        # to exist, we must assume that no url is allowable.
        # This prevents false positives when a user erroneously
        # calls can_fetch() before calling read().
        if not self.last_checked:
            return False

        # search for given user agent matches
        # the first match counts
        parsed_url = urlparse.urlparse(urllib.unquote(url))
        url = urlparse.urlunparse(('', '', parsed_url.path,
            parsed_url.params, parsed_url.query, parsed_url.fragment))
        url = urllib.quote(url)
        if not url:
            url = "/"
        for entry in self.entries:
            if entry.applies_to(useragent):
                return entry.allowance(url)
        # try the default entry last
        if self.default_entry:
            return self.default_entry.allowance(url)
        # agent not found ==> access granted
        return True


    def __str__(self):
        entries = self.entries
        if self.default_entry is not None:
            entries = entries + [self.default_entry]
        return '\n'.join(map(str, entries)) + '\n'


class RuleLine:
    """A rule line is a single "Allow:" (allowance==True) or "Disallow:"
       (allowance==False) followed by a path."""
    def __init__(self, path, allowance):
        if path == '' and not allowance:
            # an empty value means allow all
            allowance = True
        path = urlparse.urlunparse(urlparse.urlparse(path))
        self.path = urllib.quote(path)
        self.allowance = allowance

    def applies_to(self, filename):
        return self.path == "*" or filename.startswith(self.path)

    def __str__(self):
        return (self.allowance and "Allow" or "Disallow") + ": " + self.path
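The matching rule above is just a prefix test with '*' as a wildcard; a standalone sketch of the same check:

```python
# Standalone sketch of RuleLine.applies_to: '*' matches everything,
# otherwise the rule path must be a prefix of the request path.
def applies_to(rule_path, filename):
    return rule_path == "*" or filename.startswith(rule_path)

print(applies_to("*", "/anything"))                # True
print(applies_to("/private/", "/private/a.html"))  # True
print(applies_to("/private/", "/public/a.html"))   # False
```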


class Entry:
    """An entry has one or more user-agents and zero or more rulelines"""
    def __init__(self):
        self.useragents = []
        self.rulelines = []

    def __str__(self):
        ret = []
        for agent in self.useragents:
            ret.extend(["User-agent: ", agent, "\n"])
        for line in self.rulelines:
            ret.extend([str(line), "\n"])
        return ''.join(ret)

    def applies_to(self, useragent):
        """check if this entry applies to the specified agent"""
        # split the name token and make it lower case
        useragent = useragent.split("/")[0].lower()
        for agent in self.useragents:
            if agent == '*':
                # we have the catch-all agent
                return True
            agent = agent.lower()
            if agent in useragent:
                return True
        return False

    def allowance(self, filename):
        """Preconditions:
        - our agent applies to this entry
        - filename is URL decoded"""
        for line in self.rulelines:
            if line.applies_to(filename):
                return line.allowance
        return True

class URLopener(urllib.FancyURLopener):
    def __init__(self, *args):
        urllib.FancyURLopener.__init__(self, *args)
        self.errcode = 200

    def prompt_user_passwd(self, host, realm):
        ## If robots.txt file is accessible only with a password,
        ## we act as if the file wasn't there.
        return None, None

    def http_error_default(self, url, fp, errcode, errmsg, headers):
        self.errcode = errcode
        return urllib.FancyURLopener.http_error_default(self, url, fp, errcode,
                                                        errmsg, headers)
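Putting the parser, entries and rule lines together: the parse()/can_fetch() flow can be exercised without any network access by feeding lines directly. (Under Python 3 this module is spelled urllib.robotparser; the API is the same.)

```python
# Feed robots.txt lines straight to parse(); the first matching rule wins.
try:
    from urllib.robotparser import RobotFileParser  # Python 3 name
except ImportError:
    from robotparser import RobotFileParser         # Python 2 name

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /private/public.html",   # listed first, so it wins below
    "Disallow: /private/",
])
print(rp.can_fetch("MyBot/1.0", "http://example.com/private/public.html"))  # True
print(rp.can_fetch("MyBot/1.0", "http://example.com/private/secret.html"))  # False
print(rp.can_fetch("MyBot/1.0", "http://example.com/index.html"))           # True
```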
"""Common pathname manipulations, OS/2 EMX version.

Instead of importing this module directly, import os and refer to this
module as os.path.
"""

import os
import stat
from genericpath import *
from genericpath import _unicode
from ntpath import (expanduser, expandvars, isabs, islink, splitdrive,
                    splitext, split, walk)

__all__ = ["normcase", "isabs", "join", "splitdrive", "split", "splitext",
           "basename", "dirname", "commonprefix", "getsize", "getmtime",
           "getatime", "getctime", "islink", "exists", "lexists", "isdir",
           "isfile", "ismount", "walk", "expanduser", "expandvars",
           "normpath", "abspath", "splitunc", "curdir", "pardir", "sep",
           "pathsep", "defpath", "altsep", "extsep", "devnull", "realpath",
           "supports_unicode_filenames"]

# strings representing various path-related bits and pieces
curdir = '.'
pardir = '..'
extsep = '.'
sep = '/'
altsep = '\\'
pathsep = ';'
defpath = '.;C:\\bin'
devnull = 'nul'


def normcase(s):
    """Normalize case of pathname.

    Makes all characters lowercase and all altseps into seps."""
    return s.replace('\\', '/').lower()


def join(a, *p):
    """Join two or more pathname components, inserting sep as needed"""
    path = a
    for b in p:
        if isabs(b):
            path = b
        elif path == '' or path[-1:] in '/\\:':
            path = path + b
        else:
            path = path + '/' + b
    return path


def splitunc(p):
    """Split a pathname into UNC mount point and relative path specifiers.

    Return a 2-tuple (unc, rest); either part may be empty.
    If unc is not empty, it has the form '//host/mount' (or similar
    using backslashes).  unc+rest is always the input path.
    Paths containing drive letters never have a UNC part.
    """
    if p[1:2] == ':':
        return '', p  # Drive letter present
    firstTwo = p[0:2]
    if firstTwo == '//' or firstTwo == '\\\\':
        # is a UNC path
        normp = normcase(p)
        index = normp.find('/', 2)
        if index == -1:
            return '', p
        index = normp.find('/', index + 1)
        if index == -1:
            index = len(p)
        return p[:index], p[index:]
    return '', p


def basename(p):
    """Returns the final component of a pathname"""
    return split(p)[1]


def dirname(p):
    """Returns the directory component of a pathname"""
    return split(p)[0]


# alias exists to lexists
lexists = exists


def ismount(path):
    """Test whether a path is a mount point (defined as root of drive)"""
    unc, rest = splitunc(path)
    if unc:
        return rest in ("", "/", "\\")
    p = splitdrive(path)[1]
    return len(p) == 1 and p[0] in '/\\'


def normpath(path):
    """Normalize path, eliminating double slashes, etc."""
    path = path.replace('\\', '/')
    prefix, path = splitdrive(path)
    while path[:1] == '/':
        prefix = prefix + '/'
        path = path[1:]
    comps = path.split('/')
    i = 0
    while i < len(comps):
        if comps[i] == '.':
            del comps[i]
        elif comps[i] == '..' and i > 0 and comps[i-1] not in ('', '..'):
            del comps[i-1:i+1]
            i = i - 1
        elif comps[i] == '' and i > 0 and comps[i-1] != '':
            del comps[i]
        else:
            i = i + 1
    # If the path is now empty, substitute '.'
    if not prefix and not comps:
        comps.append('.')
    return prefix + '/'.join(comps)


def abspath(path):
    """Return the absolute version of a path"""
    if not isabs(path):
        if isinstance(path, _unicode):
            cwd = os.getcwdu()
        else:
            cwd = os.getcwd()
        path = join(cwd, path)
    return normpath(path)

# realpath is a no-op on systems without islink support
realpath = abspath

supports_unicode_filenames = False
"""An extensible library for opening URLs using a variety of protocols

The simplest way to use this module is to call the urlopen function,
which accepts a string containing a URL or a Request object (described
below).  It opens the URL and returns the results as file-like
object; the returned object has some extra methods described below.

The OpenerDirector manages a collection of Handler objects that do
all the actual work.  Each Handler implements a particular protocol or
option.  The OpenerDirector is a composite object that invokes the
Handlers needed to open the requested URL.  For example, the
HTTPHandler performs HTTP GET and POST requests and deals with
non-error returns.  The HTTPRedirectHandler automatically deals with
HTTP 301, 302, 303 and 307 redirect errors, and the HTTPDigestAuthHandler
deals with digest authentication.

urlopen(url, data=None) -- Basic usage is the same as the original
urllib.  Pass the url and optionally the data to POST to an HTTP URL, and
get a file-like object back.  One difference is that you can also pass
a Request instance instead of URL.  Raises a URLError (subclass of
IOError); for HTTP errors, raises an HTTPError, which can also be
treated as a valid response.

build_opener -- Function that creates a new OpenerDirector instance.
Will install the default handlers.  Accepts one or more Handlers as
arguments, either instances or Handler classes that it will
instantiate.  If one of the arguments is a subclass of a default
handler, it will be installed instead of the default.

install_opener -- Installs a new opener as the default opener.

objects of interest:

OpenerDirector -- Sets up the User Agent as the Python-urllib client and manages
the Handler classes, while dealing with requests and responses.

Request -- An object that encapsulates the state of a request.  The
state can be as simple as the URL.  It can also include extra HTTP
headers, e.g. a User-Agent.

BaseHandler -- The base class for handler classes.  Handlers are
registered with an OpenerDirector, which invokes them as needed.

exceptions:
URLError -- A subclass of IOError, individual protocols have their own
specific subclass.

HTTPError -- Also a valid HTTP response, so you can treat an HTTP error
as an exceptional event or valid response.

internals:
BaseHandler and parent
_call_chain conventions

Example usage:

import urllib2

# set up authentication info
authinfo = urllib2.HTTPBasicAuthHandler()
authinfo.add_password(realm='PDQ Application',
                      uri='https://mahler:8092/site-updates.py',
                      user='klem',
                      passwd='geheim$parole')

proxy_support = urllib2.ProxyHandler({"http" : "http://ahad-haam:3128"})

# build a new opener that adds authentication and caching FTP handlers
opener = urllib2.build_opener(proxy_support, authinfo, urllib2.CacheFTPHandler)

# install it
urllib2.install_opener(opener)

f = urllib2.urlopen('http://www.python.org/')


"""

# XXX issues:
# If an authentication error handler tries to perform authentication
# but fails, how should the error be signalled?  The client needs to
# know the HTTP error code.  But if the handler knows what the problem
# was, e.g., that it didn't recognize the hash algorithm requested in
# the challenge, it would be good to pass that information along to
# the client, too.
# ftp errors aren't handled cleanly
# check digest against correct (i.e. non-apache) implementation

# Possible extensions:
# complex proxies  XXX not sure what exactly was meant by this
# abstract factory for opener

import base64
import hashlib
import httplib
import mimetools
import os
import posixpath
import random
import re
import socket
import sys
import time
import urlparse
import bisect
import warnings

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO

# check for SSL
try:
    import ssl
except ImportError:
    _have_ssl = False
else:
    _have_ssl = True

from urllib import (unwrap, unquote, splittype, splithost, quote,
     addinfourl, splitport, splittag, toBytes,
     splitattr, ftpwrapper, splituser, splitpasswd, splitvalue)

# support for FileHandler, proxies via environment variables
from urllib import localhost, url2pathname, getproxies, proxy_bypass

# used in User-Agent header sent
__version__ = sys.version[:3]

_opener = None
def urlopen(url, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
            cafile=None, capath=None, cadefault=False, context=None):
    global _opener
    if cafile or capath or cadefault:
        if context is not None:
            raise ValueError(
                "You can't pass both context and any of cafile, capath, and "
                "cadefault"
            )
        if not _have_ssl:
            raise ValueError('SSL support not available')
        context = ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH,
                                             cafile=cafile,
                                             capath=capath)
        https_handler = HTTPSHandler(context=context)
        opener = build_opener(https_handler)
    elif context:
        https_handler = HTTPSHandler(context=context)
        opener = build_opener(https_handler)
    elif _opener is None:
        _opener = opener = build_opener()
    else:
        opener = _opener
    return opener.open(url, data, timeout)

def install_opener(opener):
    global _opener
    _opener = opener

# do these error classes make sense?
# make sure all of the IOError stuff is overridden.  we just want to be
# subtypes.

class URLError(IOError):
    # URLError is a sub-type of IOError, but it doesn't share any of
    # the implementation.  need to override __init__ and __str__.
    # It sets self.args for compatibility with other EnvironmentError
    # subclasses, but args doesn't have the typical format with errno in
    # slot 0 and strerror in slot 1.  This may be better than nothing.
    def __init__(self, reason):
        self.args = reason,
        self.reason = reason

    def __str__(self):
        return '<urlopen error %s>' % self.reason

class HTTPError(URLError, addinfourl):
    """Raised when HTTP error occurs, but also acts like non-error return"""
    __super_init = addinfourl.__init__

    def __init__(self, url, code, msg, hdrs, fp):
        self.code = code
        self.msg = msg
        self.hdrs = hdrs
        self.fp = fp
        self.filename = url
        # The addinfourl classes depend on fp being a valid file
        # object.  In some cases, the HTTPError may not have a valid
        # file object.  If this happens, the simplest workaround is to
        # not initialize the base classes.
        if fp is not None:
            self.__super_init(fp, hdrs, url, code)

    def __str__(self):
        return 'HTTP Error %s: %s' % (self.code, self.msg)

    # since URLError specifies a .reason attribute, HTTPError should also
    #  provide this attribute. See issue13211 for discussion.
    @property
    def reason(self):
        return self.msg

    def info(self):
        return self.hdrs

# copied from cookielib.py
_cut_port_re = re.compile(r":\d+$")
def request_host(request):
    """Return request-host, as defined by RFC 2965.

    Variation from RFC: returned value is lowercased, for convenient
    comparison.

    """
    url = request.get_full_url()
    host = urlparse.urlparse(url)[1]
    if host == "":
        host = request.get_header("Host", "")

    # remove port, if present
    host = _cut_port_re.sub("", host, 1)
    return host.lower()
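The host normalization above (lowercase, trailing port stripped) depends only on the regex, so it can be sketched standalone; normalize_host is a hypothetical helper mirroring the tail of request_host():

```python
# Same port-stripping regex as _cut_port_re above, applied to a bare host.
import re

cut_port = re.compile(r":\d+$")

def normalize_host(host):
    # hypothetical helper, for illustration only
    return cut_port.sub("", host, 1).lower()

print(normalize_host("Example.COM:8080"))  # example.com
print(normalize_host("example.com"))       # example.com
```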

class Request:

    def __init__(self, url, data=None, headers={},
                 origin_req_host=None, unverifiable=False):
        # unwrap('<URL:type://host/path>') --> 'type://host/path'
        self.__original = unwrap(url)
        self.__original, self.__fragment = splittag(self.__original)
        self.type = None
        # self.__r_type is what's left after doing the splittype
        self.host = None
        self.port = None
        self._tunnel_host = None
        self.data = data
        self.headers = {}
        for key, value in headers.items():
            self.add_header(key, value)
        self.unredirected_hdrs = {}
        if origin_req_host is None:
            origin_req_host = request_host(self)
        self.origin_req_host = origin_req_host
        self.unverifiable = unverifiable

    def __getattr__(self, attr):
        # XXX this is a fallback mechanism to guard against these
        # methods getting called in a non-standard order.  this may be
        # too complicated and/or unnecessary.
        # XXX should the __r_XXX attributes be public?
        if attr in ('_Request__r_type', '_Request__r_host'):
            getattr(self, 'get_' + attr[12:])()
            return self.__dict__[attr]
        raise AttributeError, attr

    def get_method(self):
        if self.has_data():
            return "POST"
        else:
            return "GET"

    # XXX these helper methods are lame

    def add_data(self, data):
        self.data = data

    def has_data(self):
        return self.data is not None

    def get_data(self):
        return self.data

    def get_full_url(self):
        if self.__fragment:
            return '%s#%s' % (self.__original, self.__fragment)
        else:
            return self.__original

    def get_type(self):
        if self.type is None:
            self.type, self.__r_type = splittype(self.__original)
            if self.type is None:
                raise ValueError, "unknown url type: %s" % self.__original
        return self.type

    def get_host(self):
        if self.host is None:
            self.host, self.__r_host = splithost(self.__r_type)
            if self.host:
                self.host = unquote(self.host)
        return self.host

    def get_selector(self):
        return self.__r_host

    def set_proxy(self, host, type):
        if self.type == 'https' and not self._tunnel_host:
            self._tunnel_host = self.host
        else:
            self.type = type
            self.__r_host = self.__original

        self.host = host

    def has_proxy(self):
        return self.__r_host == self.__original

    def get_origin_req_host(self):
        return self.origin_req_host

    def is_unverifiable(self):
        return self.unverifiable

    def add_header(self, key, val):
        # useful for something like authentication
        self.headers[key.capitalize()] = val

    def add_unredirected_header(self, key, val):
        # will not be added to a redirected request
        self.unredirected_hdrs[key.capitalize()] = val

    def has_header(self, header_name):
        return (header_name in self.headers or
                header_name in self.unredirected_hdrs)

    def get_header(self, header_name, default=None):
        return self.headers.get(
            header_name,
            self.unredirected_hdrs.get(header_name, default))

    def header_items(self):
        hdrs = self.unredirected_hdrs.copy()
        hdrs.update(self.headers)
        return hdrs.items()
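Header keys pass through str.capitalize() in add_header(), so lookups behave case-insensitively in practice, and header_items() merges the unredirected headers in. A sketch (the class is urllib.request.Request under Python 3):

```python
# add_header() capitalizes keys; header_items() merges both header dicts.
try:
    from urllib.request import Request  # Python 3 name
except ImportError:
    from urllib2 import Request         # Python 2 name

req = Request("http://example.com/", headers={"x-token": "abc"})
req.add_unredirected_header("User-agent", "MyBot/1.0")
print(req.has_header("X-token"))                 # True: stored capitalized
print(sorted(k for k, v in req.header_items()))  # ['User-agent', 'X-token']
```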

class OpenerDirector:
    def __init__(self):
        client_version = "Python-urllib/%s" % __version__
        self.addheaders = [('User-agent', client_version)]
        # self.handlers is retained only for backward compatibility
        self.handlers = []
        # manage the individual handlers
        self.handle_open = {}
        self.handle_error = {}
        self.process_response = {}
        self.process_request = {}

    def add_handler(self, handler):
        if not hasattr(handler, "add_parent"):
            raise TypeError("expected BaseHandler instance, got %r" %
                            type(handler))

        added = False
        for meth in dir(handler):
            if meth in ["redirect_request", "do_open", "proxy_open"]:
                # oops, coincidental match
                continue

            i = meth.find("_")
            protocol = meth[:i]
            condition = meth[i+1:]

            if condition.startswith("error"):
                j = condition.find("_") + i + 1
                kind = meth[j+1:]
                try:
                    kind = int(kind)
                except ValueError:
                    pass
                lookup = self.handle_error.get(protocol, {})
                self.handle_error[protocol] = lookup
            elif condition == "open":
                kind = protocol
                lookup = self.handle_open
            elif condition == "response":
                kind = protocol
                lookup = self.process_response
            elif condition == "request":
                kind = protocol
                lookup = self.process_request
            else:
                continue

            handlers = lookup.setdefault(kind, [])
            if handlers:
                bisect.insort(handlers, handler)
            else:
                handlers.append(handler)
            added = True

        if added:
            bisect.insort(self.handlers, handler)
            handler.add_parent(self)

    def close(self):
        # Only exists for backwards compatibility.
        pass

    def _call_chain(self, chain, kind, meth_name, *args):
        # Handlers raise an exception if no one else should try to handle
        # the request, or return None if they can't but another handler
        # could.  Otherwise, they return the response.
        handlers = chain.get(kind, ())
        for handler in handlers:
            func = getattr(handler, meth_name)

            result = func(*args)
            if result is not None:
                return result

    def open(self, fullurl, data=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT):
        # accept a URL or a Request object
        if isinstance(fullurl, basestring):
            req = Request(fullurl, data)
        else:
            req = fullurl
            if data is not None:
                req.add_data(data)

        req.timeout = timeout
        protocol = req.get_type()

        # pre-process request
        meth_name = protocol+"_request"
        for processor in self.process_request.get(protocol, []):
            meth = getattr(processor, meth_name)
            req = meth(req)

        response = self._open(req, data)

        # post-process response
        meth_name = protocol+"_response"
        for processor in self.process_response.get(protocol, []):
            meth = getattr(processor, meth_name)
            response = meth(req, response)

        return response

    def _open(self, req, data=None):
        result = self._call_chain(self.handle_open, 'default',
                                  'default_open', req)
        if result:
            return result

        protocol = req.get_type()
        result = self._call_chain(self.handle_open, protocol, protocol +
                                  '_open', req)
        if result:
            return result

        return self._call_chain(self.handle_open, 'unknown',
                                'unknown_open', req)

    def error(self, proto, *args):
        if proto in ('http', 'https'):
            # XXX http[s] protocols are special-cased
            dict = self.handle_error['http'] # https is no different from http here
            proto = args[2]  # YUCK!
            meth_name = 'http_error_%s' % proto
            http_err = 1
            orig_args = args
        else:
            dict = self.handle_error
            meth_name = proto + '_error'
            http_err = 0
        args = (dict, proto, meth_name) + args
        result = self._call_chain(*args)
        if result:
            return result

        if http_err:
            args = (dict, 'default', 'http_error_default') + orig_args
            return self._call_chain(*args)

# XXX probably also want an abstract factory that knows when it makes
# sense to skip a superclass in favor of a subclass and when it might
# make sense to include both

def build_opener(*handlers):
    """Create an opener object from a list of handlers.

    The opener will use several default handlers, including support
    for HTTP, FTP and when applicable, HTTPS.

    If any of the handlers passed as arguments are subclasses of the
    default handlers, the default handlers will not be used.
    """
    import types
    def isclass(obj):
        return isinstance(obj, (types.ClassType, type))

    opener = OpenerDirector()
    default_classes = [ProxyHandler, UnknownHandler, HTTPHandler,
                       HTTPDefaultErrorHandler, HTTPRedirectHandler,
                       FTPHandler, FileHandler, HTTPErrorProcessor]
    if hasattr(httplib, 'HTTPS'):
        default_classes.append(HTTPSHandler)
    skip = set()
    for klass in default_classes:
        for check in handlers:
            if isclass(check):
                if issubclass(check, klass):
                    skip.add(klass)
            elif isinstance(check, klass):
                skip.add(klass)
    for klass in skip:
        default_classes.remove(klass)

    for klass in default_classes:
        opener.add_handler(klass())

    for h in handlers:
        if isclass(h):
            h = h()
        opener.add_handler(h)
    return opener
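
# Typical use (illustrative):
#
#     opener = build_opener(HTTPCookieProcessor())
#     f = opener.open('http://www.example.com/')
#
# HTTPCookieProcessor is not a default handler, so it is added alongside
# the defaults; passing a subclass (or instance) of a default such as
# HTTPHandler would instead replace that default.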

class BaseHandler:
    handler_order = 500

    def add_parent(self, parent):
        self.parent = parent

    def close(self):
        # Only exists for backwards compatibility
        pass

    def __lt__(self, other):
        if not hasattr(other, "handler_order"):
            # Try to preserve the old behavior of having custom classes
            # inserted after default ones (works only for custom user
            # classes which are not aware of handler_order).
            return True
        return self.handler_order < other.handler_order


class HTTPErrorProcessor(BaseHandler):
    """Process HTTP error responses."""
    handler_order = 1000  # after all other processing

    def http_response(self, request, response):
        code, msg, hdrs = response.code, response.msg, response.info()

        # According to RFC 2616, a "2xx" code indicates that the client's
        # request was successfully received, understood, and accepted.
        if not (200 <= code < 300):
            response = self.parent.error(
                'http', request, response, code, msg, hdrs)

        return response

    https_response = http_response

class HTTPDefaultErrorHandler(BaseHandler):
    def http_error_default(self, req, fp, code, msg, hdrs):
        raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)

class HTTPRedirectHandler(BaseHandler):
    # maximum number of redirections to any single URL
    # this is needed because of the state that cookies introduce
    max_repeats = 4
    # maximum total number of redirections (regardless of URL) before
    # assuming we're in a loop
    max_redirections = 10

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        """Return a Request or None in response to a redirect.

        This is called by the http_error_30x methods when a
        redirection response is received.  If a redirection should
        take place, return a new Request to allow http_error_30x to
        perform the redirect.  Otherwise, raise HTTPError if no other
        handler should try to handle this URL.  Return None if you
        can't handle it, but another handler might.
        """
        m = req.get_method()
        if (code in (301, 302, 303, 307) and m in ("GET", "HEAD")
            or code in (301, 302, 303) and m == "POST"):
            # Strictly (according to RFC 2616), 301 or 302 in response
            # to a POST MUST NOT cause a redirection without confirmation
            # from the user (of urllib2, in this case).  In practice,
            # essentially all clients do redirect in this case, so we
            # do the same.
            # be lenient with URIs containing a space
            newurl = newurl.replace(' ', '%20')
            newheaders = dict((k,v) for k,v in req.headers.items()
                              if k.lower() not in ("content-length", "content-type")
                             )
            return Request(newurl,
                           headers=newheaders,
                           origin_req_host=req.get_origin_req_host(),
                           unverifiable=True)
        else:
            raise HTTPError(req.get_full_url(), code, msg, headers, fp)

    # Implementation note: To avoid the server sending us into an
    # infinite loop, the request object needs to track what URLs we
    # have already seen.  Do this by adding a handler-specific
    # attribute to the Request object.
    def http_error_302(self, req, fp, code, msg, headers):
        # Some servers (incorrectly) return multiple Location headers
        # (so probably same goes for URI).  Use first header.
        if 'location' in headers:
            newurl = headers.getheaders('location')[0]
        elif 'uri' in headers:
            newurl = headers.getheaders('uri')[0]
        else:
            return

        # fix a possible malformed URL
        urlparts = urlparse.urlparse(newurl)
        if not urlparts.path and urlparts.netloc:
            urlparts = list(urlparts)
            urlparts[2] = "/"
        newurl = urlparse.urlunparse(urlparts)

        newurl = urlparse.urljoin(req.get_full_url(), newurl)

        # For security reasons we do not allow redirects to protocols
        # other than HTTP, HTTPS or FTP.
        newurl_lower = newurl.lower()
        if not (newurl_lower.startswith('http://') or
                newurl_lower.startswith('https://') or
                newurl_lower.startswith('ftp://')):
            raise HTTPError(newurl, code,
                            msg + " - Redirection to url '%s' is not allowed" %
                            newurl,
                            headers, fp)

        # XXX Probably want to forget about the state of the current
        # request, although that might interact poorly with other
        # handlers that also use handler-specific request attributes
        new = self.redirect_request(req, fp, code, msg, headers, newurl)
        if new is None:
            return

        # loop detection
        # .redirect_dict has a key url if url was previously visited.
        if hasattr(req, 'redirect_dict'):
            visited = new.redirect_dict = req.redirect_dict
            if (visited.get(newurl, 0) >= self.max_repeats or
                len(visited) >= self.max_redirections):
                raise HTTPError(req.get_full_url(), code,
                                self.inf_msg + msg, headers, fp)
        else:
            visited = new.redirect_dict = req.redirect_dict = {}
        visited[newurl] = visited.get(newurl, 0) + 1

        # Don't close the fp until we are sure that we won't use it
        # with HTTPError.
        fp.read()
        fp.close()

        return self.parent.open(new, timeout=req.timeout)

    http_error_301 = http_error_303 = http_error_307 = http_error_302

    inf_msg = "The HTTP server returned a redirect error that would " \
              "lead to an infinite loop.\n" \
              "The last 30x error message was:\n"


def _parse_proxy(proxy):
    """Return (scheme, user, password, host/port) given a URL or an authority.

    If a URL is supplied, it must have an authority (host:port) component.
    According to RFC 3986, having an authority component means the URL must
    have two slashes after the scheme:

    >>> _parse_proxy('file:/ftp.example.com/')
    Traceback (most recent call last):
    ValueError: proxy URL with no authority: 'file:/ftp.example.com/'

    The first three items of the returned tuple may be None.

    Examples of authority parsing:

    >>> _parse_proxy('proxy.example.com')
    (None, None, None, 'proxy.example.com')
    >>> _parse_proxy('proxy.example.com:3128')
    (None, None, None, 'proxy.example.com:3128')

    The authority component may optionally include userinfo (assumed to be
    username:password):

    >>> _parse_proxy('joe:password@proxy.example.com')
    (None, 'joe', 'password', 'proxy.example.com')
    >>> _parse_proxy('joe:password@proxy.example.com:3128')
    (None, 'joe', 'password', 'proxy.example.com:3128')

    Same examples, but with URLs instead:

    >>> _parse_proxy('http://proxy.example.com/')
    ('http', None, None, 'proxy.example.com')
    >>> _parse_proxy('http://proxy.example.com:3128/')
    ('http', None, None, 'proxy.example.com:3128')
    >>> _parse_proxy('http://joe:password@proxy.example.com/')
    ('http', 'joe', 'password', 'proxy.example.com')
    >>> _parse_proxy('http://joe:password@proxy.example.com:3128')
    ('http', 'joe', 'password', 'proxy.example.com:3128')

    Everything after the authority is ignored:

    >>> _parse_proxy('ftp://joe:password@proxy.example.com/rubbish:3128')
    ('ftp', 'joe', 'password', 'proxy.example.com')

    Test for no trailing '/' case:

    >>> _parse_proxy('http://joe:password@proxy.example.com')
    ('http', 'joe', 'password', 'proxy.example.com')

    """
    scheme, r_scheme = splittype(proxy)
    if not r_scheme.startswith("/"):
        # authority
        scheme = None
        authority = proxy
    else:
        # URL
        if not r_scheme.startswith("//"):
            raise ValueError("proxy URL with no authority: %r" % proxy)
        # We have an authority, so for RFC 3986-compliant URLs (by ss. 3.2
        # and 3.3), the path is empty or starts with '/'
        end = r_scheme.find("/", 2)
        if end == -1:
            end = None
        authority = r_scheme[2:end]
    userinfo, hostport = splituser(authority)
    if userinfo is not None:
        user, password = splitpasswd(userinfo)
    else:
        user = password = None
    return scheme, user, password, hostport

class ProxyHandler(BaseHandler):
    # Proxies must be in front
    handler_order = 100

    def __init__(self, proxies=None):
        if proxies is None:
            proxies = getproxies()
        assert hasattr(proxies, 'has_key'), "proxies must be a mapping"
        self.proxies = proxies
        for type, url in proxies.items():
            setattr(self, '%s_open' % type,
                    lambda r, proxy=url, type=type, meth=self.proxy_open: \
                    meth(r, proxy, type))

    def proxy_open(self, req, proxy, type):
        orig_type = req.get_type()
        proxy_type, user, password, hostport = _parse_proxy(proxy)

        if proxy_type is None:
            proxy_type = orig_type

        req.get_host()

        if req.host and proxy_bypass(req.host):
            return None

        if user and password:
            user_pass = '%s:%s' % (unquote(user), unquote(password))
            creds = base64.b64encode(user_pass).strip()
            req.add_header('Proxy-authorization', 'Basic ' + creds)
        hostport = unquote(hostport)
        req.set_proxy(hostport, proxy_type)

        if orig_type == proxy_type or orig_type == 'https':
            # let other handlers take care of it
            return None
        else:
            # need to start over, because the other handlers don't
            # grok the proxy's URL type
            # e.g. if we have a constructor arg proxies like so:
            # {'http': 'ftp://proxy.example.com'}, we may end up turning
            # a request for http://acme.example.com/a into one for
            # ftp://proxy.example.com/a
            return self.parent.open(req, timeout=req.timeout)

class HTTPPasswordMgr:

    def __init__(self):
        self.passwd = {}

    def add_password(self, realm, uri, user, passwd):
        # uri could be a single URI or a sequence
        if isinstance(uri, basestring):
            uri = [uri]
        if not realm in self.passwd:
            self.passwd[realm] = {}
        for default_port in True, False:
            reduced_uri = tuple(
                [self.reduce_uri(u, default_port) for u in uri])
            self.passwd[realm][reduced_uri] = (user, passwd)

    def find_user_password(self, realm, authuri):
        domains = self.passwd.get(realm, {})
        for default_port in True, False:
            reduced_authuri = self.reduce_uri(authuri, default_port)
            for uris, authinfo in domains.iteritems():
                for uri in uris:
                    if self.is_suburi(uri, reduced_authuri):
                        return authinfo
        return None, None

    def reduce_uri(self, uri, default_port=True):
        """Accept authority or URI and extract only the authority and path."""
        # note HTTP URLs do not have a userinfo component
        parts = urlparse.urlsplit(uri)
        if parts[1]:
            # URI
            scheme = parts[0]
            authority = parts[1]
            path = parts[2] or '/'
        else:
            # host or host:port
            scheme = None
            authority = uri
            path = '/'
        host, port = splitport(authority)
        if default_port and port is None and scheme is not None:
            dport = {"http": 80,
                     "https": 443,
                     }.get(scheme)
            if dport is not None:
                authority = "%s:%d" % (host, dport)
        return authority, path
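
    # Example (illustrative): a full URL and a bare authority both reduce
    # to a comparable (authority, path) pair; with default_port=True the
    # scheme's default port is made explicit:
    #
    #     reduce_uri('http://example.com/path/doc.html')
    #         -> ('example.com:80', '/path/doc.html')
    #     reduce_uri('example.com')
    #         -> ('example.com', '/')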

    def is_suburi(self, base, test):
        """Check if test is below base in a URI tree

        Both args must be URIs in reduced form.
        """
        if base == test:
            return True
        if base[0] != test[0]:
            return False
        common = posixpath.commonprefix((base[1], test[1]))
        if len(common) == len(base[1]):
            return True
        return False
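
    # Example (illustrative): with reduced URIs, '/foo/bar' is below '/foo'
    # (note that commonprefix matches character-wise, not per path segment):
    #
    #     is_suburi(('example.com:80', '/foo'), ('example.com:80', '/foo/bar'))
    #         -> True
    #     is_suburi(('example.com:80', '/foo'), ('other.com:80', '/foo'))
    #         -> False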


class HTTPPasswordMgrWithDefaultRealm(HTTPPasswordMgr):

    def find_user_password(self, realm, authuri):
        user, password = HTTPPasswordMgr.find_user_password(self, realm,
                                                            authuri)
        if user is not None:
            return user, password
        return HTTPPasswordMgr.find_user_password(self, None, authuri)


class AbstractBasicAuthHandler:

    # XXX this allows for multiple auth-schemes, but will stupidly pick
    # the last one with a realm specified.

    # allow for double- and single-quoted realm values
    # (single quotes are a violation of the RFC, but appear in the wild)
    rx = re.compile('(?:[^,]*,)*[ \t]*([^ \t,]+)[ \t]+'
                    'realm=(["\']?)([^"\']*)\\2', re.I)

    # XXX could pre-emptively send auth info already accepted (RFC 2617,
    # end of section 2, and section 1.2 immediately after "credentials"
    # production).

    def __init__(self, password_mgr=None):
        if password_mgr is None:
            password_mgr = HTTPPasswordMgr()
        self.passwd = password_mgr
        self.add_password = self.passwd.add_password

    def http_error_auth_reqed(self, authreq, host, req, headers):
        # host may be an authority (without userinfo) or a URL with an
        # authority
        # XXX could be multiple headers
        authreq = headers.get(authreq, None)

        if authreq:
            mo = AbstractBasicAuthHandler.rx.search(authreq)
            if mo:
                scheme, quote, realm = mo.groups()
                if quote not in ['"', "'"]:
                    warnings.warn("Basic Auth Realm was unquoted",
                                  UserWarning, 2)
                if scheme.lower() == 'basic':
                    return self.retry_http_basic_auth(host, req, realm)

    def retry_http_basic_auth(self, host, req, realm):
        user, pw = self.passwd.find_user_password(realm, host)
        if pw is not None:
            raw = "%s:%s" % (user, pw)
            auth = 'Basic %s' % base64.b64encode(raw).strip()
            if req.get_header(self.auth_header, None) == auth:
                return None
            req.add_unredirected_header(self.auth_header, auth)
            return self.parent.open(req, timeout=req.timeout)
        else:
            return None


class HTTPBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler):

    auth_header = 'Authorization'

    def http_error_401(self, req, fp, code, msg, headers):
        url = req.get_full_url()
        response = self.http_error_auth_reqed('www-authenticate',
                                              url, req, headers)
        return response


class ProxyBasicAuthHandler(AbstractBasicAuthHandler, BaseHandler):

    auth_header = 'Proxy-authorization'

    def http_error_407(self, req, fp, code, msg, headers):
        # http_error_auth_reqed requires that there is no userinfo component in
        # authority.  Assume there isn't one, since urllib2 does not (and
        # should not, RFC 3986 s. 3.2.1) support requests for URLs containing
        # userinfo.
        authority = req.get_host()
        response = self.http_error_auth_reqed('proxy-authenticate',
                                          authority, req, headers)
        return response


def randombytes(n):
    """Return n random bytes."""
    # Use /dev/urandom if it is available.  Fall back to random module
    # if not.  It might be worthwhile to extend this function to use
    # other platform-specific mechanisms for getting random bytes.
    if os.path.exists("/dev/urandom"):
        f = open("/dev/urandom", "rb")
        s = f.read(n)
        f.close()
        return s
    else:
        L = [chr(random.randrange(0, 256)) for i in range(n)]
        return "".join(L)

class AbstractDigestAuthHandler:
    # Digest authentication is specified in RFC 2617.

    # XXX The client does not inspect the Authentication-Info header
    # in a successful response.

    # XXX It should be possible to test this implementation against
    # a mock server that just generates a static set of challenges.

    # XXX qop="auth-int" support is shaky

    def __init__(self, passwd=None):
        if passwd is None:
            passwd = HTTPPasswordMgr()
        self.passwd = passwd
        self.add_password = self.passwd.add_password
        self.retried = 0
        self.nonce_count = 0
        self.last_nonce = None

    def reset_retry_count(self):
        self.retried = 0

    def http_error_auth_reqed(self, auth_header, host, req, headers):
        authreq = headers.get(auth_header, None)
        if self.retried > 5:
            # Don't fail endlessly - if we failed once, we'll probably
            # fail a second time. Hm. Unless the Password Manager is
            # prompting for the information. Crap. This isn't great
            # but it's better than the current 'repeat until recursion
            # depth exceeded' approach <wink>
            raise HTTPError(req.get_full_url(), 401, "digest auth failed",
                            headers, None)
        else:
            self.retried += 1
        if authreq:
            scheme = authreq.split()[0]
            if scheme.lower() == 'digest':
                return self.retry_http_digest_auth(req, authreq)

    def retry_http_digest_auth(self, req, auth):
        token, challenge = auth.split(' ', 1)
        chal = parse_keqv_list(parse_http_list(challenge))
        auth = self.get_authorization(req, chal)
        if auth:
            auth_val = 'Digest %s' % auth
            if req.headers.get(self.auth_header, None) == auth_val:
                return None
            req.add_unredirected_header(self.auth_header, auth_val)
            resp = self.parent.open(req, timeout=req.timeout)
            return resp

    def get_cnonce(self, nonce):
        # The cnonce-value is an opaque
        # quoted string value provided by the client and used by both client
        # and server to avoid chosen plaintext attacks, to provide mutual
        # authentication, and to provide some message integrity protection.
        # This isn't a fabulous effort, but it's probably Good Enough.
        dig = hashlib.sha1("%s:%s:%s:%s" % (self.nonce_count, nonce, time.ctime(),
                                            randombytes(8))).hexdigest()
        return dig[:16]

    def get_authorization(self, req, chal):
        try:
            realm = chal['realm']
            nonce = chal['nonce']
            qop = chal.get('qop')
            algorithm = chal.get('algorithm', 'MD5')
            # mod_digest doesn't send an opaque, even though it isn't
            # supposed to be optional
            opaque = chal.get('opaque', None)
        except KeyError:
            return None

        H, KD = self.get_algorithm_impls(algorithm)
        if H is None:
            return None

        user, pw = self.passwd.find_user_password(realm, req.get_full_url())
        if user is None:
            return None

        # XXX not implemented yet
        if req.has_data():
            entdig = self.get_entity_digest(req.get_data(), chal)
        else:
            entdig = None

        A1 = "%s:%s:%s" % (user, realm, pw)
        A2 = "%s:%s" % (req.get_method(),
                        # XXX selector: what about proxies and full urls
                        req.get_selector())
        if qop == 'auth':
            if nonce == self.last_nonce:
                self.nonce_count += 1
            else:
                self.nonce_count = 1
                self.last_nonce = nonce

            ncvalue = '%08x' % self.nonce_count
            cnonce = self.get_cnonce(nonce)
            noncebit = "%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, H(A2))
            respdig = KD(H(A1), noncebit)
        elif qop is None:
            respdig = KD(H(A1), "%s:%s" % (nonce, H(A2)))
        else:
            # XXX handle auth-int.
            raise URLError("qop '%s' is not supported." % qop)

        # XXX should the partial digests be encoded too?

        base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \
               'response="%s"' % (user, realm, nonce, req.get_selector(),
                                  respdig)
        if opaque:
            base += ', opaque="%s"' % opaque
        if entdig:
            base += ', digest="%s"' % entdig
        base += ', algorithm="%s"' % algorithm
        if qop:
            base += ', qop=auth, nc=%s, cnonce="%s"' % (ncvalue, cnonce)
        return base

    def get_algorithm_impls(self, algorithm):
        # algorithm should be case-insensitive according to RFC 2617
        algorithm = algorithm.upper()
        # lambdas assume digest modules are imported at the top level
        if algorithm == 'MD5':
            H = lambda x: hashlib.md5(x).hexdigest()
        elif algorithm == 'SHA':
            H = lambda x: hashlib.sha1(x).hexdigest()
        # XXX MD5-sess
        else:
            raise ValueError("Unsupported digest authentication "
                             "algorithm %r" % algorithm.lower())
        KD = lambda s, d: H("%s:%s" % (s, d))
        return H, KD

    def get_entity_digest(self, data, chal):
        # XXX not implemented yet
        return None


class HTTPDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler):
    """An authentication protocol defined by RFC 2069

    Digest authentication improves on basic authentication because it
    does not transmit passwords in the clear.
    """

    auth_header = 'Authorization'
    handler_order = 490  # before Basic auth

    def http_error_401(self, req, fp, code, msg, headers):
        host = urlparse.urlparse(req.get_full_url())[1]
        retry = self.http_error_auth_reqed('www-authenticate',
                                           host, req, headers)
        self.reset_retry_count()
        return retry


class ProxyDigestAuthHandler(BaseHandler, AbstractDigestAuthHandler):

    auth_header = 'Proxy-Authorization'
    handler_order = 490  # before Basic auth

    def http_error_407(self, req, fp, code, msg, headers):
        host = req.get_host()
        retry = self.http_error_auth_reqed('proxy-authenticate',
                                           host, req, headers)
        self.reset_retry_count()
        return retry

class AbstractHTTPHandler(BaseHandler):

    def __init__(self, debuglevel=0):
        self._debuglevel = debuglevel

    def set_http_debuglevel(self, level):
        self._debuglevel = level

    def do_request_(self, request):
        host = request.get_host()
        if not host:
            raise URLError('no host given')

        if request.has_data():  # POST
            data = request.get_data()
            if not request.has_header('Content-type'):
                request.add_unredirected_header(
                    'Content-type',
                    'application/x-www-form-urlencoded')
            if not request.has_header('Content-length'):
                request.add_unredirected_header(
                    'Content-length', '%d' % len(data))

        sel_host = host
        if request.has_proxy():
            scheme, sel = splittype(request.get_selector())
            sel_host, sel_path = splithost(sel)

        if not request.has_header('Host'):
            request.add_unredirected_header('Host', sel_host)
        for name, value in self.parent.addheaders:
            name = name.capitalize()
            if not request.has_header(name):
                request.add_unredirected_header(name, value)

        return request

    def do_open(self, http_class, req, **http_conn_args):
        """Return an addinfourl object for the request, using http_class.

        http_class must implement the HTTPConnection API from httplib.
        The addinfourl return value is a file-like object.  It also
        has methods and attributes including:
            - info(): return a mimetools.Message object for the headers
            - geturl(): return the original request URL
            - code: HTTP status code
        """
        host = req.get_host()
        if not host:
            raise URLError('no host given')

        # will parse host:port
        h = http_class(host, timeout=req.timeout, **http_conn_args)
        h.set_debuglevel(self._debuglevel)

        headers = dict(req.unredirected_hdrs)
        headers.update(dict((k, v) for k, v in req.headers.items()
                            if k not in headers))

        # We want to make an HTTP/1.1 request, but the addinfourl
        # class isn't prepared to deal with a persistent connection.
        # It will try to read all remaining data from the socket,
        # which will block while the server waits for the next request.
        # So make sure the connection gets closed after the (only)
        # request.
        headers["Connection"] = "close"
        headers = dict(
            (name.title(), val) for name, val in headers.items())

        if req._tunnel_host:
            tunnel_headers = {}
            proxy_auth_hdr = "Proxy-Authorization"
            if proxy_auth_hdr in headers:
                tunnel_headers[proxy_auth_hdr] = headers[proxy_auth_hdr]
                # Proxy-Authorization should not be sent to origin
                # server.
                del headers[proxy_auth_hdr]
            h.set_tunnel(req._tunnel_host, headers=tunnel_headers)

        try:
            h.request(req.get_method(), req.get_selector(), req.data, headers)
        except socket.error, err: # XXX what error?
            h.close()
            raise URLError(err)
        else:
            try:
                r = h.getresponse(buffering=True)
            except TypeError: # buffering kw not supported
                r = h.getresponse()

        # Pick apart the HTTPResponse object to get the addinfourl
        # object initialized properly.

        # Wrap the HTTPResponse object in socket's file object adapter
        # for Windows.  That adapter calls recv(), so delegate recv()
        # to read().  This weird wrapping allows the returned object to
        # have readline() and readlines() methods.

        # XXX It might be better to extract the read buffering code
        # out of socket._fileobject() and into a base class.

        r.recv = r.read
        fp = socket._fileobject(r, close=True)

        resp = addinfourl(fp, r.msg, req.get_full_url())
        resp.code = r.status
        resp.msg = r.reason
        return resp


class HTTPHandler(AbstractHTTPHandler):

    def http_open(self, req):
        return self.do_open(httplib.HTTPConnection, req)

    http_request = AbstractHTTPHandler.do_request_

if hasattr(httplib, 'HTTPS'):
    class HTTPSHandler(AbstractHTTPHandler):

        def __init__(self, debuglevel=0, context=None):
            AbstractHTTPHandler.__init__(self, debuglevel)
            self._context = context

        def https_open(self, req):
            return self.do_open(httplib.HTTPSConnection, req,
                context=self._context)

        https_request = AbstractHTTPHandler.do_request_

class HTTPCookieProcessor(BaseHandler):
    def __init__(self, cookiejar=None):
        import cookielib
        if cookiejar is None:
            cookiejar = cookielib.CookieJar()
        self.cookiejar = cookiejar

    def http_request(self, request):
        self.cookiejar.add_cookie_header(request)
        return request

    def http_response(self, request, response):
        self.cookiejar.extract_cookies(response, request)
        return response

    https_request = http_request
    https_response = http_response

class UnknownHandler(BaseHandler):
    def unknown_open(self, req):
        type = req.get_type()
        raise URLError('unknown url type: %s' % type)

def parse_keqv_list(l):
    """Parse list of key=value strings where keys are not duplicated."""
    parsed = {}
    for elt in l:
        k, v = elt.split('=', 1)
        if v[0] == '"' and v[-1] == '"':
            v = v[1:-1]
        parsed[k] = v
    return parsed
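
# Example (illustrative), e.g. when parsing a Digest challenge:
#
#     parse_keqv_list(['realm="example"', 'qop=auth'])
#         -> {'realm': 'example', 'qop': 'auth'}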

def parse_http_list(s):
    """Parse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings.  A quoted-string could
    contain a comma.  A non-quoted string could have quotes in the
    middle.  Neither commas nor quotes count if they are escaped.
    Only double-quotes count, not single-quotes.
    """
    res = []
    part = ''

    escape = quote = False
    for cur in s:
        if escape:
            part += cur
            escape = False
            continue
        if quote:
            if cur == '\\':
                escape = True
                continue
            elif cur == '"':
                quote = False
            part += cur
            continue

        if cur == ',':
            res.append(part)
            part = ''
            continue

        if cur == '"':
            quote = True

        part += cur

    # append last part
    if part:
        res.append(part)

    return [part.strip() for part in res]
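
# Example (illustrative): commas inside a quoted-string do not split the
# list, and surrounding whitespace is stripped:
#
#     parse_http_list('a, "b, c", d')
#         -> ['a', '"b, c"', 'd']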

def _safe_gethostbyname(host):
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

class FileHandler(BaseHandler):
    # Use local file or FTP depending on form of URL
    def file_open(self, req):
        url = req.get_selector()
        if url[:2] == '//' and url[2:3] != '/' and (req.host and
                req.host != 'localhost'):
            req.type = 'ftp'
            return self.parent.open(req)
        else:
            return self.open_local_file(req)

    # names for the localhost
    names = None
    def get_names(self):
        if FileHandler.names is None:
            try:
                FileHandler.names = tuple(
                    socket.gethostbyname_ex('localhost')[2] +
                    socket.gethostbyname_ex(socket.gethostname())[2])
            except socket.gaierror:
                FileHandler.names = (socket.gethostbyname('localhost'),)
        return FileHandler.names

    # not entirely sure what the rules are here
    def open_local_file(self, req):
        import email.utils
        import mimetypes
        host = req.get_host()
        filename = req.get_selector()
        localfile = url2pathname(filename)
        try:
            stats = os.stat(localfile)
            size = stats.st_size
            modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
            mtype = mimetypes.guess_type(filename)[0]
            headers = mimetools.Message(StringIO(
                'Content-type: %s\nContent-length: %d\nLast-modified: %s\n' %
                (mtype or 'text/plain', size, modified)))
            if host:
                host, port = splitport(host)
            if not host or \
                (not port and _safe_gethostbyname(host) in self.get_names()):
                if host:
                    origurl = 'file://' + host + filename
                else:
                    origurl = 'file://' + filename
                return addinfourl(open(localfile, 'rb'), headers, origurl)
        except OSError, msg:
            # urllib2 users shouldn't expect OSErrors coming from urlopen()
            raise URLError(msg)
        raise URLError('file not on local host')
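The routing rule in `file_open` and the header block that `open_local_file` synthesizes are both easy to reproduce in isolation. A sketch, with the routing check restated as a plain function and made-up filename, size, and mtime values for illustration:

```python
import email.utils
import mimetypes

def classify_file_url(selector, host):
    # Restates file_open's rule: file://host/path with a non-local host
    # is retried over FTP; everything else is opened locally.
    if (selector[:2] == '//' and selector[2:3] != '/'
            and host and host != 'localhost'):
        return 'ftp'
    return 'local'

def local_file_headers(filename, size, mtime):
    # Same header block open_local_file builds before wrapping the file.
    mtype = mimetypes.guess_type(filename)[0]
    modified = email.utils.formatdate(mtime, usegmt=True)
    return ('Content-type: %s\nContent-length: %d\nLast-modified: %s\n'
            % (mtype or 'text/plain', size, modified))

print(classify_file_url('//ftp.example.com/pub/x', 'ftp.example.com'))  # ftp
print(classify_file_url('/etc/hosts', ''))                              # local
print(local_file_headers('report.txt', 1024, 0))
```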

class FTPHandler(BaseHandler):
    def ftp_open(self, req):
        import ftplib
        import mimetypes
        host = req.get_host()
        if not host:
            raise URLError('ftp error: no host given')
        host, port = splitport(host)
        if port is None:
            port = ftplib.FTP_PORT
        else:
            port = int(port)

        # username/password handling
        user, host = splituser(host)
        if user:
            user, passwd = splitpasswd(user)
        else:
            passwd = None
        host = unquote(host)
        user = user or ''
        passwd = passwd or ''

        try:
            host = socket.gethostbyname(host)
        except socket.error, msg:
            raise URLError(msg)
        path, attrs = splitattr(req.get_selector())
        dirs = path.split('/')
        dirs = map(unquote, dirs)
        dirs, file = dirs[:-1], dirs[-1]
        if dirs and not dirs[0]:
            dirs = dirs[1:]
        try:
            fw = self.connect_ftp(user, passwd, host, port, dirs, req.timeout)
            type = file and 'I' or 'D'
            for attr in attrs:
                attr, value = splitvalue(attr)
                if attr.lower() == 'type' and \
                   value in ('a', 'A', 'i', 'I', 'd', 'D'):
                    type = value.upper()
            fp, retrlen = fw.retrfile(file, type)
            headers = ""
            mtype = mimetypes.guess_type(req.get_full_url())[0]
            if mtype:
                headers += "Content-type: %s\n" % mtype
            if retrlen is not None and retrlen >= 0:
                headers += "Content-length: %d\n" % retrlen
            sf = StringIO(headers)
            headers = mimetools.Message(sf)
            return addinfourl(fp, headers, req.get_full_url())
        except ftplib.all_errors, msg:
            raise URLError, ('ftp error: %s' % msg), sys.exc_info()[2]

    def connect_ftp(self, user, passwd, host, port, dirs, timeout):
        fw = ftpwrapper(user, passwd, host, port, dirs, timeout,
                        persistent=False)
##        fw.ftp.set_debuglevel(1)
        return fw
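`ftp_open` pulls credentials, port, and the transfer type out of the URL with urllib's `split*` helpers. In current Python the credential/port split is a single `urlparse` call (the `urllib.parse` module; Python 2 spelled it `urlparse`), and the `;type=` override can be restated as a small function. The URL below is made up:

```python
from urllib.parse import urlparse

def ftp_transfer_type(filename, attrs):
    # Mirrors ftp_open: 'I' (binary image) for a file, 'D' (directory
    # listing) otherwise, unless a ;type= attribute overrides it.
    ttype = 'I' if filename else 'D'
    for attr in attrs:
        name, _, value = attr.partition('=')
        if name.lower() == 'type' and value in ('a', 'A', 'i', 'I', 'd', 'D'):
            ttype = value.upper()
    return ttype

u = urlparse('ftp://anonymous:secret@ftp.example.com:2121/pub/file.txt;type=a')
print(u.username, u.password, u.hostname, u.port)  # anonymous secret ftp.example.com 2121
print(u.path, u.params)                            # /pub/file.txt type=a
print(ftp_transfer_type('file.txt', ['type=a']))   # A
```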

class CacheFTPHandler(FTPHandler):
    # XXX would be nice to have pluggable cache strategies
    # XXX this stuff is definitely not thread safe
    def __init__(self):
        self.cache = {}
        self.timeout = {}
        self.soonest = 0
        self.delay = 60
        self.max_conns = 16

    def setTimeout(self, t):
        self.delay = t

    def setMaxConns(self, m):
        self.max_conns = m

    def connect_ftp(self, user, passwd, host, port, dirs, timeout):
        key = user, host, port, '/'.join(dirs), timeout
        if key in self.cache:
            self.timeout[key] = time.time() + self.delay
        else:
            self.cache[key] = ftpwrapper(user, passwd, host, port, dirs, timeout)
            self.timeout[key] = time.time() + self.delay
        self.check_cache()
        return self.cache[key]

    def check_cache(self):
        # first check for old ones
        t = time.time()
        if self.soonest <= t:
            for k, v in self.timeout.items():
                if v < t:
                    self.cache[k].close()
                    del self.cache[k]
                    del self.timeout[k]
        # guard: min() raises ValueError if every entry was just evicted
        self.soonest = min(self.timeout.values()) if self.timeout else 0

        # then check the size
        if len(self.cache) >= self.max_conns:
            for k, v in self.timeout.items():
                if v == self.soonest:
                    # close before dropping, or the connection leaks
                    self.cache[k].close()
                    del self.cache[k]
                    del self.timeout[k]
                    break
            self.soonest = min(self.timeout.values()) if self.timeout else 0

    def clear_cache(self):
        for conn in self.cache.values():
            conn.close()
        self.cache.clear()
        self.timeout.clear()
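The age pass in `check_cache` is easier to reason about (and to test) with the clock injected instead of read from `time.time()`. A simplified sketch, with a hypothetical `DummyConn` standing in for `ftpwrapper` and a guard for the empty-cache `min()`:

```python
class DummyConn(object):
    # Hypothetical stand-in for ftpwrapper; just records close().
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

def evict_expired(cache, timeouts, now):
    # Age pass from check_cache with an injected clock: close and drop
    # every connection whose deadline has passed, then recompute soonest.
    for key, deadline in list(timeouts.items()):
        if deadline < now:
            cache.pop(key).close()
            del timeouts[key]
    return min(timeouts.values()) if timeouts else now

old, fresh = DummyConn(), DummyConn()
cache = {'a': old, 'b': fresh}
timeouts = {'a': 50.0, 'b': 200.0}
soonest = evict_expired(cache, timeouts, now=100.0)
print(sorted(cache), soonest, old.closed)  # ['b'] 200.0 True
```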
		;				4
				�			!	=	9		cCs(tdgdt�j�d}dGH|GHtj�dkr`tdgdd��}|j�ndGHtd	gdt�}td
dgd|jdt�}t|j�d�GHHd
GHytdg�j�GHWnFtk
r}|j	t	j
krdGHdGH|jGHq$dG|j	GHnXtj
dIJdS(NtpsRZis
Process list:tidR�cSs
tjd�S(Nid(R}tsetuid(((s"/usr/lib64/python2.7/subprocess.pyt<lambda>R`sLooking for 'hda'...tdmesgtgrepthdaRtsTrying a weird file...s/this/path/does/not/exists'The file didn't exist.  I thought so...sChild traceback:tErrorsGosh.  No error.(RRR[R}tgetuidRURZtreprR1R2tENOENTRR)Ru(tplisttptp1tp2R6((s"/usr/lib64/python2.7/subprocess.pyt_demo_posixs*
!cCsldGHtddtdt�}tdd|jdt�}t|j�d�GHdGHtd	�}|j�dS(
Ns%Looking for 'PROMPT' in set output...R�RZR�s
find "PROMPT"RtisExecuting calc...tcalc(RRR0RZR^R[RU(RbRcRa((s"/usr/lib64/python2.7/subprocess.pyt
_demo_windows+st__main__(5R
R)tplatformRsR}R�RRR�R2RzRR�R�R�RRR9R�R-tImportErrorRR�RRMR=t__all__RRRR R!R"R#R$RctsysconfR�R'R/RRR7RTRRRRkRMRRdRfR(((s"/usr/lib64/python2.7/subprocess.pyt<module>sp

:
						!	F����	)	
r"""HTTP cookie handling for web clients.

This module has (now fairly distant) origins in Gisle Aas' Perl module
HTTP::Cookies, from the libwww-perl library.

Docstrings, comments and debug strings in this code refer to the
attributes of the HTTP cookie system as cookie-attributes, to distinguish
them clearly from Python attributes.

Class diagram (note that BSDDBCookieJar and the MSIE* classes are not
distributed with the Python standard library, but are available from
http://wwwsearch.sf.net/):

                        CookieJar____
                        /     \      \
            FileCookieJar      \      \
             /    |   \         \      \
 MozillaCookieJar | LWPCookieJar \      \
                  |               |      \
                  |   ---MSIEBase |       \
                  |  /      |     |        \
                  | /   MSIEDBCookieJar BSDDBCookieJar
                  |/
               MSIECookieJar

"""

__all__ = ['Cookie', 'CookieJar', 'CookiePolicy', 'DefaultCookiePolicy',
           'FileCookieJar', 'LWPCookieJar', 'lwp_cookie_str', 'LoadError',
           'MozillaCookieJar']

import re, urlparse, copy, time, urllib
try:
    import threading as _threading
except ImportError:
    import dummy_threading as _threading
import httplib  # only for the default HTTP port
from calendar import timegm

debug = False   # set to True to enable debugging via the logging module
logger = None

def _debug(*args):
    if not debug:
        return
    global logger
    if not logger:
        import logging
        logger = logging.getLogger("cookielib")
    return logger.debug(*args)


DEFAULT_HTTP_PORT = str(httplib.HTTP_PORT)
MISSING_FILENAME_TEXT = ("a filename was not supplied (nor was the CookieJar "
                         "instance initialised with one)")

def _warn_unhandled_exception():
    # There are a few catch-all except: statements in this module, for
    # catching input that's bad in unexpected ways.  Warn if any
    # exceptions are caught there.
    import warnings, traceback, StringIO
    f = StringIO.StringIO()
    traceback.print_exc(None, f)
    msg = f.getvalue()
    warnings.warn("cookielib bug!\n%s" % msg, stacklevel=2)


# Date/time conversion
# -----------------------------------------------------------------------------

EPOCH_YEAR = 1970
def _timegm(tt):
    year, month, mday, hour, min, sec = tt[:6]
    if ((year >= EPOCH_YEAR) and (1 <= month <= 12) and (1 <= mday <= 31) and
        (0 <= hour <= 24) and (0 <= min <= 59) and (0 <= sec <= 61)):
        return timegm(tt)
    else:
        return None

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
MONTHS_LOWER = []
for month in MONTHS: MONTHS_LOWER.append(month.lower())

def time2isoz(t=None):
    """Return a string representing the time t (seconds since the epoch).

    If the function is called without an argument, it will use the current
    time.

    The format of the returned string is like "YYYY-MM-DD hh:mm:ssZ",
    representing Universal Time (UTC, aka GMT).  An example of this format is:

    1994-11-24 08:49:37Z

    """
    if t is None: t = time.time()
    year, mon, mday, hour, min, sec = time.gmtime(t)[:6]
    return "%04d-%02d-%02d %02d:%02d:%02dZ" % (
        year, mon, mday, hour, min, sec)

def time2netscape(t=None):
    """Return a string representing the time t (seconds since the epoch).

    If the function is called without an argument, it will use the current
    time.

    The format of the returned string is like this:

    Wed, DD-Mon-YYYY HH:MM:SS GMT

    """
    if t is None: t = time.time()
    year, mon, mday, hour, min, sec, wday = time.gmtime(t)[:7]
    return "%s, %02d-%s-%04d %02d:%02d:%02d GMT" % (
        DAYS[wday], mday, MONTHS[mon-1], year, hour, min, sec)


UTC_ZONES = {"GMT": None, "UTC": None, "UT": None, "Z": None}

TIMEZONE_RE = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$")
def offset_from_tz_string(tz):
    offset = None
    if tz in UTC_ZONES:
        offset = 0
    else:
        m = TIMEZONE_RE.search(tz)
        if m:
            offset = 3600 * int(m.group(2))
            if m.group(3):
                offset = offset + 60 * int(m.group(3))
            if m.group(1) == '-':
                offset = -offset
    return offset
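
# A self-contained sketch of the offset computation above (it mirrors
# UTC_ZONES and TIMEZONE_RE; `tz_offset` is an illustrative name):

```python
import re

UTC_ZONES = {"GMT": None, "UTC": None, "UT": None, "Z": None}
TIMEZONE_RE = re.compile(r"^([-+])?(\d\d?):?(\d\d)?$")

def tz_offset(tz):
    # UTC-equivalent zone names map to a zero offset
    if tz in UTC_ZONES:
        return 0
    m = TIMEZONE_RE.search(tz)
    if not m:
        return None  # unknown non-UTC zone names (e.g. "EST") are not handled
    offset = 3600 * int(m.group(2))
    if m.group(3):
        offset += 60 * int(m.group(3))
    return -offset if m.group(1) == "-" else offset
```

# e.g. "+01:30" yields 5400 seconds, "-0800" yields -28800, "EST" yields None.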

def _str2time(day, mon, yr, hr, min, sec, tz):
    # translate month name to number
    # month numbers start with 1 (January)
    try:
        mon = MONTHS_LOWER.index(mon.lower())+1
    except ValueError:
        # maybe it's already a number
        try:
            imon = int(mon)
        except ValueError:
            return None
        if 1 <= imon <= 12:
            mon = imon
        else:
            return None

    # make sure clock elements are defined
    if hr is None: hr = 0
    if min is None: min = 0
    if sec is None: sec = 0

    yr = int(yr)
    day = int(day)
    hr = int(hr)
    min = int(min)
    sec = int(sec)

    if yr < 1000:
        # find "obvious" year
        cur_yr = time.localtime(time.time())[0]
        m = cur_yr % 100
        tmp = yr
        yr = yr + cur_yr - m
        m = m - tmp
        if abs(m) > 50:
            if m > 0: yr = yr + 100
            else: yr = yr - 100

    # convert UTC time tuple to seconds since epoch (not timezone-adjusted)
    t = _timegm((yr, mon, day, hr, min, sec, tz))

    if t is not None:
        # adjust time using timezone string, to get absolute time since epoch
        if tz is None:
            tz = "UTC"
        tz = tz.upper()
        offset = offset_from_tz_string(tz)
        if offset is None:
            return None
        t = t - offset

    return t
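
# The two-digit-year handling above picks the century that lands the year
# within 50 years of the current date.  A minimal sketch of just that
# arithmetic (`obvious_year` is a hypothetical helper; the current year is
# injected as a parameter for testability):

```python
def obvious_year(yr, cur_yr):
    # mirrors the yr < 1000 branch of _str2time above
    if yr < 1000:
        m = cur_yr % 100
        tmp = yr
        yr = yr + cur_yr - m      # put yr in the same century as cur_yr
        m = m - tmp
        if abs(m) > 50:           # more than 50 years away: shift a century
            yr = yr + 100 if m > 0 else yr - 100
    return yr
```

# So "94" seen in 2024 resolves to 1994, while "05" resolves to 2005.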

STRICT_DATE_RE = re.compile(
    r"^[SMTWF][a-z][a-z], (\d\d) ([JFMASOND][a-z][a-z]) "
    r"(\d\d\d\d) (\d\d):(\d\d):(\d\d) GMT$")
WEEKDAY_RE = re.compile(
    r"^(?:Sun|Mon|Tue|Wed|Thu|Fri|Sat)[a-z]*,?\s*", re.I)
LOOSE_HTTP_DATE_RE = re.compile(
    r"""^
    (\d\d?)            # day
       (?:\s+|[-\/])
    (\w+)              # month
        (?:\s+|[-\/])
    (\d+)              # year
    (?:
          (?:\s+|:)    # separator before clock
       (\d\d?):(\d\d)  # hour:min
       (?::(\d\d))?    # optional seconds
    )?                 # optional clock
       \s*
    (?:
       ([-+]?\d{2,4}|(?![APap][Mm]\b)[A-Za-z]+) # timezone
       \s*
    )?
    (?:
       \(\w+\)         # ASCII representation of timezone in parens.
       \s*
    )?$""", re.X)
def http2time(text):
    """Return, as seconds since the epoch, the time represented by a string.

    Return value is an integer.

    None is returned if the format of the string is unrecognized, the time is
    outside the representable range, or the timezone string is not recognized.
    If the string contains no timezone, UTC is assumed.

    The timezone in the string may be numerical (like "-0800" or "+0100") or a
    string timezone (like "UTC", "GMT", "BST" or "EST").  Currently, only the
    timezone strings equivalent to UTC (zero offset) are known to the function.

    The function loosely parses the following formats:

    Wed, 09 Feb 1994 22:23:32 GMT       -- HTTP format
    Tuesday, 08-Feb-94 14:15:29 GMT     -- old rfc850 HTTP format
    Tuesday, 08-Feb-1994 14:15:29 GMT   -- broken rfc850 HTTP format
    09 Feb 1994 22:23:32 GMT            -- HTTP format (no weekday)
    08-Feb-94 14:15:29 GMT              -- rfc850 format (no weekday)
    08-Feb-1994 14:15:29 GMT            -- broken rfc850 format (no weekday)

    The parser ignores leading and trailing whitespace.  The time may be
    absent.

    If the year is given with only 2 digits, the function will select the
    century that makes the year closest to the current date.

    """
    # fast exit for strictly conforming string
    m = STRICT_DATE_RE.search(text)
    if m:
        g = m.groups()
        mon = MONTHS_LOWER.index(g[1].lower()) + 1
        tt = (int(g[2]), mon, int(g[0]),
              int(g[3]), int(g[4]), float(g[5]))
        return _timegm(tt)

    # No, we need some messy parsing...

    # clean up
    text = text.lstrip()
    text = WEEKDAY_RE.sub("", text, 1)  # Useless weekday

    # tz is time zone specifier string
    day, mon, yr, hr, min, sec, tz = [None]*7

    # loose regexp parse
    m = LOOSE_HTTP_DATE_RE.search(text)
    if m is not None:
        day, mon, yr, hr, min, sec, tz = m.groups()
    else:
        return None  # bad format

    return _str2time(day, mon, yr, hr, min, sec, tz)

ISO_DATE_RE = re.compile(
    r"""^
    (\d{4})              # year
       [-\/]?
    (\d\d?)              # numerical month
       [-\/]?
    (\d\d?)              # day
   (?:
         (?:\s+|[-:Tt])  # separator before clock
      (\d\d?):?(\d\d)    # hour:min
      (?::?(\d\d(?:\.\d*)?))?  # optional seconds (and fractional)
   )?                    # optional clock
      \s*
   (?:
      ([-+]?\d\d?:?(:?\d\d)?
       |Z|z)             # timezone  (Z is "zero meridian", i.e. GMT)
      \s*
   )?$""", re.X)
def iso2time(text):
    """
    As for http2time, but parses the ISO 8601 formats:

    1994-02-03 14:15:29 -0100    -- ISO 8601 format
    1994-02-03 14:15:29          -- zone is optional
    1994-02-03                   -- only date
    1994-02-03T14:15:29          -- Use T as separator
    19940203T141529Z             -- ISO 8601 compact format
    19940203                     -- only date

    """
    # clean up
    text = text.lstrip()

    # tz is time zone specifier string
    day, mon, yr, hr, min, sec, tz = [None]*7

    # loose regexp parse
    m = ISO_DATE_RE.search(text)
    if m is not None:
        # XXX there's an extra bit of the timezone I'm ignoring here: is
        #   this the right thing to do?
        yr, mon, day, hr, min, sec, tz, _ = m.groups()
    else:
        return None  # bad format

    return _str2time(day, mon, yr, hr, min, sec, tz)


# Header parsing
# -----------------------------------------------------------------------------

def unmatched(match):
    """Return unmatched part of re.Match object."""
    start, end = match.span(0)
    return match.string[:start]+match.string[end:]
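
# For example, removing the first matched region leaves the concatenated
# remainder; `unmatched_sketch` mirrors the function above and is named
# purely for illustration:

```python
import re

def unmatched_sketch(match):
    # everything in match.string outside match.span(0)
    start, end = match.span(0)
    return match.string[:start] + match.string[end:]

# first match of "= utf8" is cut out, head and tail are rejoined
m = re.search(r"=\s*(\w+)", "charset = utf8; q=1")
```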

HEADER_TOKEN_RE =        re.compile(r"^\s*([^=\s;,]+)")
HEADER_QUOTED_VALUE_RE = re.compile(r"^\s*=\s*\"([^\"\\]*(?:\\.[^\"\\]*)*)\"")
HEADER_VALUE_RE =        re.compile(r"^\s*=\s*([^\s;,]*)")
HEADER_ESCAPE_RE = re.compile(r"\\(.)")
def split_header_words(header_values):
    r"""Parse header values into a list of lists containing key,value pairs.

    The function knows how to deal with ",", ";" and "=" as well as quoted
    values after "=".  A list of space-separated tokens is parsed as if the
    tokens were separated by ";".

    If the header_values passed as argument contains multiple values, then they
    are treated as if they were a single value separated by comma ",".

    This means that this function is useful for parsing header fields that
    follow this syntax (BNF as from the HTTP/1.1 specification, but we relax
    the requirement for tokens).

      headers           = #header
      header            = (token | parameter) *( [";"] (token | parameter))

      token             = 1*<any CHAR except CTLs or separators>
      separators        = "(" | ")" | "<" | ">" | "@"
                        | "," | ";" | ":" | "\" | <">
                        | "/" | "[" | "]" | "?" | "="
                        | "{" | "}" | SP | HT

      quoted-string     = ( <"> *(qdtext | quoted-pair ) <"> )
      qdtext            = <any TEXT except <">>
      quoted-pair       = "\" CHAR

      parameter         = attribute "=" value
      attribute         = token
      value             = token | quoted-string

    Each header is represented by a list of key/value pairs.  The value for a
    simple token (not part of a parameter) is None.  Syntactically incorrect
    headers will not necessarily be parsed as you would want.

    This is easier to describe with some examples:

    >>> split_header_words(['foo="bar"; port="80,81"; discard, bar=baz'])
    [[('foo', 'bar'), ('port', '80,81'), ('discard', None)], [('bar', 'baz')]]
    >>> split_header_words(['text/html; charset="iso-8859-1"'])
    [[('text/html', None), ('charset', 'iso-8859-1')]]
    >>> split_header_words([r'Basic realm="\"foo\bar\""'])
    [[('Basic', None), ('realm', '"foobar"')]]

    """
    assert not isinstance(header_values, basestring)
    result = []
    for text in header_values:
        orig_text = text
        pairs = []
        while text:
            m = HEADER_TOKEN_RE.search(text)
            if m:
                text = unmatched(m)
                name = m.group(1)
                m = HEADER_QUOTED_VALUE_RE.search(text)
                if m:  # quoted value
                    text = unmatched(m)
                    value = m.group(1)
                    value = HEADER_ESCAPE_RE.sub(r"\1", value)
                else:
                    m = HEADER_VALUE_RE.search(text)
                    if m:  # unquoted value
                        text = unmatched(m)
                        value = m.group(1)
                        value = value.rstrip()
                    else:
                        # no value, a lone token
                        value = None
                pairs.append((name, value))
            elif text.lstrip().startswith(","):
                # concatenated headers, as per RFC 2616 section 4.2
                text = text.lstrip()[1:]
                if pairs: result.append(pairs)
                pairs = []
            else:
                # skip junk
                non_junk, nr_junk_chars = re.subn(r"^[=\s;]*", "", text)
                assert nr_junk_chars > 0, (
                    "split_header_words bug: '%s', '%s', %s" %
                    (orig_text, text, pairs))
                text = non_junk
        if pairs: result.append(pairs)
    return result

HEADER_JOIN_ESCAPE_RE = re.compile(r"([\"\\])")
def join_header_words(lists):
    """Do the inverse (almost) of the conversion done by split_header_words.

    Takes a list of lists of (key, value) pairs and produces a single header
    value.  Attribute values are quoted if needed.

    >>> join_header_words([[("text/plain", None), ("charset", "iso-8859/1")]])
    'text/plain; charset="iso-8859/1"'
    >>> join_header_words([[("text/plain", None)], [("charset", "iso-8859/1")]])
    'text/plain, charset="iso-8859/1"'

    """
    headers = []
    for pairs in lists:
        attr = []
        for k, v in pairs:
            if v is not None:
                if not re.search(r"^\w+$", v):
                    v = HEADER_JOIN_ESCAPE_RE.sub(r"\\\1", v)  # escape " and \
                    v = '"%s"' % v
                k = "%s=%s" % (k, v)
            attr.append(k)
        if attr: headers.append("; ".join(attr))
    return ", ".join(headers)

def _strip_quotes(text):
    if text.startswith('"'):
        text = text[1:]
    if text.endswith('"'):
        text = text[:-1]
    return text

def parse_ns_headers(ns_headers):
    """Ad-hoc parser for Netscape protocol cookie-attributes.

    The old Netscape cookie format for Set-Cookie can for instance contain
    an unquoted "," in the expires field, so we have to use this ad-hoc
    parser instead of split_header_words.

    XXX This may not make the best possible effort to parse all the crap
    that Netscape Cookie headers contain.  Ronald Tschalar's HTTPClient
    parser is probably better, so we could do worse than following it if
    this ever gives any trouble.

    Currently, this is also used for parsing RFC 2109 cookies.

    """
    known_attrs = ("expires", "domain", "path", "secure",
                   # RFC 2109 attrs (may turn up in Netscape cookies, too)
                   "version", "port", "max-age")

    result = []
    for ns_header in ns_headers:
        pairs = []
        version_set = False

        # XXX: The following does not strictly adhere to RFCs in that empty
        # names and values are legal (the former will only appear once and will
        # be overwritten if multiple occurrences are present). This is
        # mostly to deal with backwards compatibility.
        for ii, param in enumerate(ns_header.split(';')):
            param = param.strip()

            key, sep, val = param.partition('=')
            key = key.strip()

            if not key:
                if ii == 0:
                    break
                else:
                    continue

            # allow for a distinction between present and empty and missing
            # altogether
            val = val.strip() if sep else None

            if ii != 0:
                lc = key.lower()
                if lc in known_attrs:
                    key = lc

                if key == "version":
                    # This is an RFC 2109 cookie.
                    if val is not None:
                        val = _strip_quotes(val)
                    version_set = True
                elif key == "expires":
                    # convert expires date to seconds since epoch
                    if val is not None:
                        val = http2time(_strip_quotes(val))  # None if invalid
            pairs.append((key, val))

        if pairs:
            if not version_set:
                pairs.append(("version", "0"))
            result.append(pairs)

    return result
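
# The per-parameter splitting in the loop above can be seen in isolation.
# This is a sketch (`split_cookie_params` is an illustrative name; the
# expires/version special-casing and known_attrs lowering are omitted):

```python
def split_cookie_params(header):
    # split on ';', partition each piece on its first '=';
    # a missing '=' (e.g. the bare "secure" flag) yields value None
    pairs = []
    for ii, param in enumerate(header.split(';')):
        param = param.strip()
        key, sep, val = param.partition('=')
        key = key.strip()
        if not key:
            if ii == 0:
                break          # empty cookie name: ignore the whole header
            continue           # empty attribute name: skip just this piece
        pairs.append((key, val.strip() if sep else None))
    return pairs
```

# e.g. "SID=31d4; Path=/; secure" becomes three pairs, the last valueless.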


IPV4_RE = re.compile(r"\.\d+$")
def is_HDN(text):
    """Return True if text is a host domain name."""
    # XXX
    # This may well be wrong.  Which RFC is HDN defined in, if any (for
    #  the purposes of RFC 2965)?
    # For the current implementation, what about IPv6?  Remember to look
    #  at other uses of IPV4_RE also, if change this.
    if IPV4_RE.search(text):
        return False
    if text == "":
        return False
    if text[0] == "." or text[-1] == ".":
        return False
    return True

def domain_match(A, B):
    """Return True if domain A domain-matches domain B, according to RFC 2965.

    A and B may be host domain names or IP addresses.

    RFC 2965, section 1:

    Host names can be specified either as an IP address or a HDN string.
    Sometimes we compare one host name with another.  (Such comparisons SHALL
    be case-insensitive.)  Host A's name domain-matches host B's if

         *  their host name strings string-compare equal; or

         * A is a HDN string and has the form NB, where N is a non-empty
            name string, B has the form .B', and B' is a HDN string.  (So,
            x.y.com domain-matches .Y.com but not Y.com.)

    Note that domain-match is not a commutative operation: a.b.c.com
    domain-matches .c.com, but not the reverse.

    """
    # Note that, if A or B are IP addresses, the only relevant part of the
    # definition of the domain-match algorithm is the direct string-compare.
    A = A.lower()
    B = B.lower()
    if A == B:
        return True
    if not is_HDN(A):
        return False
    i = A.rfind(B)
    if i == -1 or i == 0:
        # A does not have form NB, or N is the empty string
        return False
    if not B.startswith("."):
        return False
    if not is_HDN(B[1:]):
        return False
    return True
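
# A self-contained sketch of the asymmetry described above (it mirrors
# is_HDN and domain_match; the lowercase names mark it as illustrative):

```python
import re

IPV4_RE = re.compile(r"\.\d+$")   # crude check: ends in ".digits"

def is_hdn(text):
    # mirrors is_HDN above
    if IPV4_RE.search(text) or text == "":
        return False
    return text[0] != "." and text[-1] != "."

def domain_match_sketch(a, b):
    # mirrors domain_match above: a must have the form NB with B = ".B'"
    a, b = a.lower(), b.lower()
    if a == b:
        return True
    if not is_hdn(a):
        return False
    i = a.rfind(b)
    if i <= 0:                    # not of the form NB, or N is empty
        return False
    return b.startswith(".") and is_hdn(b[1:])
```

# domain_match_sketch("x.y.com", ".Y.com") is True; the reverse is False.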

def liberal_is_HDN(text):
    """Return True if text is sort-of like a host domain name.

    For accepting/blocking domains.

    """
    if IPV4_RE.search(text):
        return False
    return True

def user_domain_match(A, B):
    """For blocking/accepting domains.

    A and B may be host domain names or IP addresses.

    """
    A = A.lower()
    B = B.lower()
    if not (liberal_is_HDN(A) and liberal_is_HDN(B)):
        if A == B:
            # equal IP addresses
            return True
        return False
    initial_dot = B.startswith(".")
    if initial_dot and A.endswith(B):
        return True
    if not initial_dot and A == B:
        return True
    return False

cut_port_re = re.compile(r":\d+$")
def request_host(request):
    """Return request-host, as defined by RFC 2965.

    Variation from RFC: returned value is lowercased, for convenient
    comparison.

    """
    url = request.get_full_url()
    host = urlparse.urlparse(url)[1]
    if host == "":
        host = request.get_header("Host", "")

    # remove port, if present
    host = cut_port_re.sub("", host, 1)
    return host.lower()

def eff_request_host(request):
    """Return a tuple (request-host, effective request-host name).

    As defined by RFC 2965, except both are lowercased.

    """
    erhn = req_host = request_host(request)
    if req_host.find(".") == -1 and not IPV4_RE.search(req_host):
        erhn = req_host + ".local"
    return req_host, erhn
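
# The effective-host rule above, in isolation: a hostname containing no dot
# gets ".local" appended (RFC 2965).  `eff_host` is an illustrative name:

```python
import re

IPV4_RE = re.compile(r"\.\d+$")

def eff_host(req_host):
    # dotless, non-IP hostnames become "<host>.local"
    if req_host.find(".") == -1 and not IPV4_RE.search(req_host):
        return req_host + ".local"
    return req_host
```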

def request_path(request):
    """Path component of request-URI, as defined by RFC 2965."""
    url = request.get_full_url()
    parts = urlparse.urlsplit(url)
    path = escape_path(parts.path)
    if not path.startswith("/"):
        # fix bad RFC 2396 absoluteURI
        path = "/" + path
    return path

def request_port(request):
    host = request.get_host()
    i = host.find(':')
    if i >= 0:
        port = host[i+1:]
        try:
            int(port)
        except ValueError:
            _debug("nonnumeric port: '%s'", port)
            return None
    else:
        port = DEFAULT_HTTP_PORT
    return port

# Characters in addition to A-Z, a-z, 0-9, '_', '.', and '-' that don't
# need to be escaped to form a valid HTTP URL (RFCs 2396 and 1738).
HTTP_PATH_SAFE = "%/;:@&=+$,!~*'()"
ESCAPED_CHAR_RE = re.compile(r"%([0-9a-fA-F][0-9a-fA-F])")
def uppercase_escaped_char(match):
    return "%%%s" % match.group(1).upper()
def escape_path(path):
    """Escape any invalid characters in HTTP URL, and uppercase all escapes."""
    # There's no knowing what character encoding was used to create URLs
    # containing %-escapes, but since we have to pick one to escape invalid
    # path characters, we pick UTF-8, as recommended in the HTML 4.0
    # specification:
    # http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.2.1
    # And here, kind of: draft-fielding-uri-rfc2396bis-03
    # (And in draft IRI specification: draft-duerst-iri-05)
    # (And here, for new URI schemes: RFC 2718)
    if isinstance(path, unicode):
        path = path.encode("utf-8")
    path = urllib.quote(path, HTTP_PATH_SAFE)
    path = ESCAPED_CHAR_RE.sub(uppercase_escaped_char, path)
    return path
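
# The escape-normalisation step alone (mirroring ESCAPED_CHAR_RE and
# uppercase_escaped_char above) uppercases every %-escape without touching
# other characters; `upper_escapes` is an illustrative name:

```python
import re

ESCAPED_CHAR_RE = re.compile(r"%([0-9a-fA-F][0-9a-fA-F])")

def upper_escapes(path):
    # rewrite each %xx escape with uppercase hex digits
    return ESCAPED_CHAR_RE.sub(lambda m: "%%%s" % m.group(1).upper(), path)
```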

def reach(h):
    """Return reach of host h, as defined by RFC 2965, section 1.

    The reach R of a host name H is defined as follows:

       *  If

          -  H is the host domain name of a host; and,

          -  H has the form A.B; and

          -  A has no embedded (that is, interior) dots; and

          -  B has at least one embedded dot, or B is the string "local".
             then the reach of H is .B.

       *  Otherwise, the reach of H is H.

    >>> reach("www.acme.com")
    '.acme.com'
    >>> reach("acme.com")
    'acme.com'
    >>> reach("acme.local")
    '.local'

    """
    i = h.find(".")
    if i >= 0:
        #a = h[:i]  # this line is only here to show what a is
        b = h[i+1:]
        i = b.find(".")
        if is_HDN(h) and (i >= 0 or b == "local"):
            return "."+b
    return h

def is_third_party(request):
    """

    RFC 2965, section 3.3.6:

        An unverifiable transaction is to a third-party host if its request-
        host U does not domain-match the reach R of the request-host O in the
        origin transaction.

    """
    req_host = request_host(request)
    if not domain_match(req_host, reach(request.get_origin_req_host())):
        return True
    else:
        return False


class Cookie:
    """HTTP Cookie.

    This class represents both Netscape and RFC 2965 cookies.

    This is deliberately a very simple class.  It just holds attributes.  It's
    possible to construct Cookie instances that don't comply with the cookie
    standards.  CookieJar.make_cookies is the factory function for Cookie
    objects -- it deals with cookie parsing, supplying defaults, and
    normalising to the representation used in this class.  CookiePolicy is
    responsible for checking them to see whether they should be accepted from
    and returned to the server.

    Note that the port may be present in the headers, but unspecified ("Port"
    rather than "Port=80", for example); if this is the case, port is None.

    """

    def __init__(self, version, name, value,
                 port, port_specified,
                 domain, domain_specified, domain_initial_dot,
                 path, path_specified,
                 secure,
                 expires,
                 discard,
                 comment,
                 comment_url,
                 rest,
                 rfc2109=False,
                 ):

        if version is not None: version = int(version)
        if expires is not None: expires = int(expires)
        if port is None and port_specified is True:
            raise ValueError("if port is None, port_specified must be false")

        self.version = version
        self.name = name
        self.value = value
        self.port = port
        self.port_specified = port_specified
        # normalise case, as per RFC 2965 section 3.3.3
        self.domain = domain.lower()
        self.domain_specified = domain_specified
        # Sigh.  We need to know whether the domain given in the
        # cookie-attribute had an initial dot, in order to follow RFC 2965
        # (as clarified in draft errata).  Needed for the returned $Domain
        # value.
        self.domain_initial_dot = domain_initial_dot
        self.path = path
        self.path_specified = path_specified
        self.secure = secure
        self.expires = expires
        self.discard = discard
        self.comment = comment
        self.comment_url = comment_url
        self.rfc2109 = rfc2109

        self._rest = copy.copy(rest)

    def has_nonstandard_attr(self, name):
        return name in self._rest
    def get_nonstandard_attr(self, name, default=None):
        return self._rest.get(name, default)
    def set_nonstandard_attr(self, name, value):
        self._rest[name] = value

    def is_expired(self, now=None):
        if now is None: now = time.time()
        if (self.expires is not None) and (self.expires <= now):
            return True
        return False

    def __str__(self):
        if self.port is None: p = ""
        else: p = ":"+self.port
        limit = self.domain + p + self.path
        if self.value is not None:
            namevalue = "%s=%s" % (self.name, self.value)
        else:
            namevalue = self.name
        return "<Cookie %s for %s>" % (namevalue, limit)

    def __repr__(self):
        args = []
        for name in ("version", "name", "value",
                     "port", "port_specified",
                     "domain", "domain_specified", "domain_initial_dot",
                     "path", "path_specified",
                     "secure", "expires", "discard", "comment", "comment_url",
                     ):
            attr = getattr(self, name)
            args.append("%s=%s" % (name, repr(attr)))
        args.append("rest=%s" % repr(self._rest))
        args.append("rfc2109=%s" % repr(self.rfc2109))
        return "Cookie(%s)" % ", ".join(args)


class CookiePolicy:
    """Defines which cookies get accepted from and returned to server.

    May also modify cookies, though this is probably a bad idea.

    The subclass DefaultCookiePolicy defines the standard rules for Netscape
    and RFC 2965 cookies -- override that if you want a customised policy.

    """
    def set_ok(self, cookie, request):
        """Return true if (and only if) cookie should be accepted from server.

        Currently, pre-expired cookies never get this far -- the CookieJar
        class deletes such cookies itself.

        """
        raise NotImplementedError()

    def return_ok(self, cookie, request):
        """Return true if (and only if) cookie should be returned to server."""
        raise NotImplementedError()

    def domain_return_ok(self, domain, request):
        """Return false if cookies should not be returned, given cookie domain.
        """
        return True

    def path_return_ok(self, path, request):
        """Return false if cookies should not be returned, given cookie path.
        """
        return True


class DefaultCookiePolicy(CookiePolicy):
    """Implements the standard rules for accepting and returning cookies."""

    DomainStrictNoDots = 1
    DomainStrictNonDomain = 2
    DomainRFC2965Match = 4

    DomainLiberal = 0
    DomainStrict = DomainStrictNoDots|DomainStrictNonDomain

    def __init__(self,
                 blocked_domains=None, allowed_domains=None,
                 netscape=True, rfc2965=False,
                 rfc2109_as_netscape=None,
                 hide_cookie2=False,
                 strict_domain=False,
                 strict_rfc2965_unverifiable=True,
                 strict_ns_unverifiable=False,
                 strict_ns_domain=DomainLiberal,
                 strict_ns_set_initial_dollar=False,
                 strict_ns_set_path=False,
                 ):
        """Constructor arguments should be passed as keyword arguments only."""
        self.netscape = netscape
        self.rfc2965 = rfc2965
        self.rfc2109_as_netscape = rfc2109_as_netscape
        self.hide_cookie2 = hide_cookie2
        self.strict_domain = strict_domain
        self.strict_rfc2965_unverifiable = strict_rfc2965_unverifiable
        self.strict_ns_unverifiable = strict_ns_unverifiable
        self.strict_ns_domain = strict_ns_domain
        self.strict_ns_set_initial_dollar = strict_ns_set_initial_dollar
        self.strict_ns_set_path = strict_ns_set_path

        if blocked_domains is not None:
            self._blocked_domains = tuple(blocked_domains)
        else:
            self._blocked_domains = ()

        if allowed_domains is not None:
            allowed_domains = tuple(allowed_domains)
        self._allowed_domains = allowed_domains

    def blocked_domains(self):
        """Return the sequence of blocked domains (as a tuple)."""
        return self._blocked_domains

    def set_blocked_domains(self, blocked_domains):
        """Set the sequence of blocked domains."""
        self._blocked_domains = tuple(blocked_domains)

    def is_blocked(self, domain):
        for blocked_domain in self._blocked_domains:
            if user_domain_match(domain, blocked_domain):
                return True
        return False

    def allowed_domains(self):
        """Return None, or the sequence of allowed domains (as a tuple)."""
        return self._allowed_domains

    def set_allowed_domains(self, allowed_domains):
        """Set the sequence of allowed domains, or None."""
        if allowed_domains is not None:
            allowed_domains = tuple(allowed_domains)
        self._allowed_domains = allowed_domains

    def is_not_allowed(self, domain):
        if self._allowed_domains is None:
            return False
        for allowed_domain in self._allowed_domains:
            if user_domain_match(domain, allowed_domain):
                return False
        return True
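
    # Example (domains are hypothetical): with
    # DefaultCookiePolicy(blocked_domains=["ads.example.com"]) and no
    # allow-list set, is_blocked("ads.example.com") is True, while
    # is_not_allowed("example.com") is False -- a None allow-list permits
    # all domains.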

    def set_ok(self, cookie, request):
        """
        If you override .set_ok(), be sure to call this method.  If it returns
        false, so should your subclass (assuming your subclass wants to be more
        strict about which cookies to accept).

        """
        _debug(" - checking cookie %s=%s", cookie.name, cookie.value)

        assert cookie.name is not None

        for n in "version", "verifiability", "name", "path", "domain", "port":
            fn_name = "set_ok_"+n
            fn = getattr(self, fn_name)
            if not fn(cookie, request):
                return False

        return True

    def set_ok_version(self, cookie, request):
        if cookie.version is None:
            # Version is always set to 0 by parse_ns_headers if it's a Netscape
            # cookie, so this must be an invalid RFC 2965 cookie.
            _debug("   Set-Cookie2 without version attribute (%s=%s)",
                   cookie.name, cookie.value)
            return False
        if cookie.version > 0 and not self.rfc2965:
            _debug("   RFC 2965 cookies are switched off")
            return False
        elif cookie.version == 0 and not self.netscape:
            _debug("   Netscape cookies are switched off")
            return False
        return True

    def set_ok_verifiability(self, cookie, request):
        if request.is_unverifiable() and is_third_party(request):
            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
                _debug("   third-party RFC 2965 cookie during "
                             "unverifiable transaction")
                return False
            elif cookie.version == 0 and self.strict_ns_unverifiable:
                _debug("   third-party Netscape cookie during "
                             "unverifiable transaction")
                return False
        return True

    def set_ok_name(self, cookie, request):
        # Try to stop servers setting V0 cookies designed to hack other
        # servers that know both V0 and V1 protocols.
        if (cookie.version == 0 and self.strict_ns_set_initial_dollar and
            cookie.name.startswith("$")):
            _debug("   illegal name (starts with '$'): '%s'", cookie.name)
            return False
        return True

    def set_ok_path(self, cookie, request):
        if cookie.path_specified:
            req_path = request_path(request)
            if ((cookie.version > 0 or
                 (cookie.version == 0 and self.strict_ns_set_path)) and
                not self.path_return_ok(cookie.path, request)):
                _debug("   path attribute %s is not a prefix of request "
                       "path %s", cookie.path, req_path)
                return False
        return True

    def set_ok_domain(self, cookie, request):
        if self.is_blocked(cookie.domain):
            _debug("   domain %s is in user block-list", cookie.domain)
            return False
        if self.is_not_allowed(cookie.domain):
            _debug("   domain %s is not in user allow-list", cookie.domain)
            return False
        if cookie.domain_specified:
            req_host, erhn = eff_request_host(request)
            domain = cookie.domain
            if self.strict_domain and (domain.count(".") >= 2):
                # XXX This should probably be compared with the Konqueror
                # (kcookiejar.cpp) and Mozilla implementations, but it's a
                # losing battle.
                i = domain.rfind(".")
                j = domain.rfind(".", 0, i)
                if j == 0:  # domain like .foo.bar
                    tld = domain[i+1:]
                    sld = domain[j+1:i]
                    if sld.lower() in ("co", "ac", "com", "edu", "org", "net",
                       "gov", "mil", "int", "aero", "biz", "cat", "coop",
                       "info", "jobs", "mobi", "museum", "name", "pro",
                       "travel", "eu") and len(tld) == 2:
                        # domain like .co.uk
                        _debug("   country-code second level domain %s", domain)
                        return False
            if domain.startswith("."):
                undotted_domain = domain[1:]
            else:
                undotted_domain = domain
            embedded_dots = (undotted_domain.find(".") >= 0)
            if not embedded_dots and domain != ".local":
                _debug("   non-local domain %s contains no embedded dot",
                       domain)
                return False
            if cookie.version == 0:
                if (not erhn.endswith(domain) and
                    (not erhn.startswith(".") and
                     not ("."+erhn).endswith(domain))):
                    _debug("   effective request-host %s (even with added "
                           "initial dot) does not end with %s",
                           erhn, domain)
                    return False
            if (cookie.version > 0 or
                (self.strict_ns_domain & self.DomainRFC2965Match)):
                if not domain_match(erhn, domain):
                    _debug("   effective request-host %s does not domain-match "
                           "%s", erhn, domain)
                    return False
            if (cookie.version > 0 or
                (self.strict_ns_domain & self.DomainStrictNoDots)):
                host_prefix = req_host[:-len(domain)]
                if (host_prefix.find(".") >= 0 and
                    not IPV4_RE.search(req_host)):
                    _debug("   host prefix %s for domain %s contains a dot",
                           host_prefix, domain)
                    return False
        return True

    def set_ok_port(self, cookie, request):
        if cookie.port_specified:
            req_port = request_port(request)
            if req_port is None:
                req_port = "80"
            else:
                req_port = str(req_port)
            for p in cookie.port.split(","):
                try:
                    int(p)
                except ValueError:
                    _debug("   bad port %s (not numeric)", p)
                    return False
                if p == req_port:
                    break
            else:
                _debug("   request port (%s) not found in %s",
                       req_port, cookie.port)
                return False
        return True

    def return_ok(self, cookie, request):
        """
        If you override .return_ok(), be sure to call this method.  If it
        returns false, so should your subclass (assuming your subclass wants to
        be more strict about which cookies to return).

        """
        # Path has already been checked by .path_return_ok(), and domain
        # blocking done by .domain_return_ok().
        _debug(" - checking cookie %s=%s", cookie.name, cookie.value)

        for n in "version", "verifiability", "secure", "expires", "port", "domain":
            fn_name = "return_ok_"+n
            fn = getattr(self, fn_name)
            if not fn(cookie, request):
                return False
        return True

    def return_ok_version(self, cookie, request):
        if cookie.version > 0 and not self.rfc2965:
            _debug("   RFC 2965 cookies are switched off")
            return False
        elif cookie.version == 0 and not self.netscape:
            _debug("   Netscape cookies are switched off")
            return False
        return True

    def return_ok_verifiability(self, cookie, request):
        if request.is_unverifiable() and is_third_party(request):
            if cookie.version > 0 and self.strict_rfc2965_unverifiable:
                _debug("   third-party RFC 2965 cookie during unverifiable "
                       "transaction")
                return False
            elif cookie.version == 0 and self.strict_ns_unverifiable:
                _debug("   third-party Netscape cookie during unverifiable "
                       "transaction")
                return False
        return True

    def return_ok_secure(self, cookie, request):
        if cookie.secure and request.get_type() != "https":
            _debug("   secure cookie with non-secure request")
            return False
        return True

    def return_ok_expires(self, cookie, request):
        if cookie.is_expired(self._now):
            _debug("   cookie expired")
            return False
        return True

    def return_ok_port(self, cookie, request):
        if cookie.port:
            req_port = request_port(request)
            if req_port is None:
                req_port = "80"
            for p in cookie.port.split(","):
                if p == req_port:
                    break
            else:
                _debug("   request port %s does not match cookie port %s",
                       req_port, cookie.port)
                return False
        return True

    def return_ok_domain(self, cookie, request):
        req_host, erhn = eff_request_host(request)
        domain = cookie.domain

        if domain and not domain.startswith("."):
            dotdomain = "." + domain
        else:
            dotdomain = domain

        # strict check of non-domain cookies: Mozilla does this, MSIE5 doesn't
        if (cookie.version == 0 and
            (self.strict_ns_domain & self.DomainStrictNonDomain) and
            not cookie.domain_specified and domain != erhn):
            _debug("   cookie with unspecified domain does not string-compare "
                   "equal to request domain")
            return False

        if cookie.version > 0 and not domain_match(erhn, domain):
            _debug("   effective request-host name %s does not domain-match "
                   "RFC 2965 cookie domain %s", erhn, domain)
            return False
        if cookie.version == 0 and not ("."+erhn).endswith(dotdomain):
            _debug("   request-host %s does not match Netscape cookie domain "
                   "%s", req_host, domain)
            return False
        return True

    def domain_return_ok(self, domain, request):
        # Liberal check of the domain.  This is here as an optimization to
        # avoid having to load lots of MSIE cookie files unless necessary.
        req_host, erhn = eff_request_host(request)
        if not req_host.startswith("."):
            req_host = "."+req_host
        if not erhn.startswith("."):
            erhn = "."+erhn
        if domain and not domain.startswith("."):
            dotdomain = "." + domain
        else:
            dotdomain = domain
        if not (req_host.endswith(dotdomain) or erhn.endswith(dotdomain)):
            #_debug("   request domain %s does not match cookie domain %s",
            #       req_host, domain)
            return False

        if self.is_blocked(domain):
            _debug("   domain %s is in user block-list", domain)
            return False
        if self.is_not_allowed(domain):
            _debug("   domain %s is not in user allow-list", domain)
            return False

        return True

    def path_return_ok(self, path, request):
        _debug("- checking cookie path=%s", path)
        req_path = request_path(request)
        pathlen = len(path)
        if req_path == path:
            return True
        elif (req_path.startswith(path) and
              (path.endswith("/") or req_path[pathlen:pathlen+1] == "/")):
            return True

        _debug("  %s does not path-match %s", req_path, path)
        return False
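
    # Illustrative results of the path-matching rules above (paths are
    # examples only): a cookie path "/acme" path-matches request paths
    # "/acme" and "/acme/login" (the match ends at a "/" boundary), but
    # not "/acmebar", where the remainder does not start with "/".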

def vals_sorted_by_key(adict):
    keys = adict.keys()
    keys.sort()
    return map(adict.get, keys)

def deepvalues(mapping):
    """Iterates over nested mapping, depth-first, in sorted order by key."""
    values = vals_sorted_by_key(mapping)
    for obj in values:
        mapping = False
        try:
            obj.items
        except AttributeError:
            pass
        else:
            mapping = True
            for subobj in deepvalues(obj):
                yield subobj
        if not mapping:
            yield obj
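
# For example, deepvalues({"b": {"x": 1}, "a": 2}) yields 2, then 1:
# top-level keys are visited in sorted order ("a" before "b"), and nested
# mappings are recursed into rather than yielded themselves.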


# Used as second parameter to dict.get() method, to distinguish absent
# dict key from one with a None value.
class Absent: pass

class CookieJar:
    """Collection of HTTP cookies.

    You may not need to know about this class: try
    urllib2.build_opener(HTTPCookieProcessor).open(url).

    """

    non_word_re = re.compile(r"\W")
    quote_re = re.compile(r"([\"\\])")
    strict_domain_re = re.compile(r"\.?[^.]*")
    domain_re = re.compile(r"[^.]*")
    dots_re = re.compile(r"^\.+")

    magic_re = r"^\#LWP-Cookies-(\d+\.\d+)"

    def __init__(self, policy=None):
        if policy is None:
            policy = DefaultCookiePolicy()
        self._policy = policy

        self._cookies_lock = _threading.RLock()
        self._cookies = {}

    def set_policy(self, policy):
        self._policy = policy

    def _cookies_for_domain(self, domain, request):
        cookies = []
        if not self._policy.domain_return_ok(domain, request):
            return []
        _debug("Checking %s for cookies to return", domain)
        cookies_by_path = self._cookies[domain]
        for path in cookies_by_path.keys():
            if not self._policy.path_return_ok(path, request):
                continue
            cookies_by_name = cookies_by_path[path]
            for cookie in cookies_by_name.values():
                if not self._policy.return_ok(cookie, request):
                    _debug("   not returning cookie")
                    continue
                _debug("   it's a match")
                cookies.append(cookie)
        return cookies

    def _cookies_for_request(self, request):
        """Return a list of cookies to be returned to server."""
        cookies = []
        for domain in self._cookies.keys():
            cookies.extend(self._cookies_for_domain(domain, request))
        return cookies

    def _cookie_attrs(self, cookies):
        """Return a list of cookie-attributes to be returned to server.

        like ['foo="bar"; $Path="/"', ...]

        The $Version attribute is also added when appropriate (currently only
        once per request).

        """
        # add cookies in order of most specific (ie. longest) path first
        cookies.sort(key=lambda arg: len(arg.path), reverse=True)

        version_set = False

        attrs = []
        for cookie in cookies:
            # set version of Cookie header
            # XXX
            # What should it be if multiple matching Set-Cookie headers have
            #  different versions themselves?
            # Answer: there is no answer; was supposed to be settled by
            #  RFC 2965 errata, but that may never appear...
            version = cookie.version
            if not version_set:
                version_set = True
                if version > 0:
                    attrs.append("$Version=%s" % version)

            # quote cookie value if necessary
            # (not for Netscape protocol, which already has any quotes
            #  intact, due to the poorly-specified Netscape Cookie: syntax)
            if ((cookie.value is not None) and
                self.non_word_re.search(cookie.value) and version > 0):
                value = self.quote_re.sub(r"\\\1", cookie.value)
            else:
                value = cookie.value

            # add cookie-attributes to be returned in Cookie header
            if cookie.value is None:
                attrs.append(cookie.name)
            else:
                attrs.append("%s=%s" % (cookie.name, value))
            if version > 0:
                if cookie.path_specified:
                    attrs.append('$Path="%s"' % cookie.path)
                if cookie.domain.startswith("."):
                    domain = cookie.domain
                    if (not cookie.domain_initial_dot and
                        domain.startswith(".")):
                        domain = domain[1:]
                    attrs.append('$Domain="%s"' % domain)
                if cookie.port is not None:
                    p = "$Port"
                    if cookie.port_specified:
                        p = p + ('="%s"' % cookie.port)
                    attrs.append(p)

        return attrs

    def add_cookie_header(self, request):
        """Add correct Cookie: header to request (urllib2.Request object).

        The Cookie2 header is also added unless policy.hide_cookie2 is true.

        """
        _debug("add_cookie_header")
        self._cookies_lock.acquire()
        try:

            self._policy._now = self._now = int(time.time())

            cookies = self._cookies_for_request(request)

            attrs = self._cookie_attrs(cookies)
            if attrs:
                if not request.has_header("Cookie"):
                    request.add_unredirected_header(
                        "Cookie", "; ".join(attrs))

            # if necessary, advertise that we know RFC 2965
            if (self._policy.rfc2965 and not self._policy.hide_cookie2 and
                not request.has_header("Cookie2")):
                for cookie in cookies:
                    if cookie.version != 1:
                        request.add_unredirected_header("Cookie2", '$Version="1"')
                        break

        finally:
            self._cookies_lock.release()

        self.clear_expired_cookies()

    def _normalized_cookie_tuples(self, attrs_set):
        """Return list of tuples containing normalised cookie information.

        attrs_set is the list of lists of key,value pairs extracted from
        the Set-Cookie or Set-Cookie2 headers.

        Tuples are name, value, standard, rest, where name and value are the
        cookie name and value, standard is a dictionary containing the standard
        cookie-attributes (discard, secure, version, expires or max-age,
        domain, path and port) and rest is a dictionary containing the rest of
        the cookie-attributes.

        """
        cookie_tuples = []

        boolean_attrs = "discard", "secure"
        value_attrs = ("version",
                       "expires", "max-age",
                       "domain", "path", "port",
                       "comment", "commenturl")

        for cookie_attrs in attrs_set:
            name, value = cookie_attrs[0]

            # Build dictionary of standard cookie-attributes (standard) and
            # dictionary of other cookie-attributes (rest).

            # Note: expiry time is normalised to seconds since epoch.  V0
            # cookies should have the Expires cookie-attribute, and V1 cookies
            # should have Max-Age, but since V1 includes RFC 2109 cookies (and
            # since V0 cookies may be a mish-mash of Netscape and RFC 2109), we
            # accept either (but prefer Max-Age).
            max_age_set = False

            bad_cookie = False

            standard = {}
            rest = {}
            for k, v in cookie_attrs[1:]:
                lc = k.lower()
                # don't lose case distinction for unknown fields
                if lc in value_attrs or lc in boolean_attrs:
                    k = lc
                if k in boolean_attrs and v is None:
                    # boolean cookie-attribute is present, but has no value
                    # (like "discard", rather than "port=80")
                    v = True
                if k in standard:
                    # only first value is significant
                    continue
                if k == "domain":
                    if v is None:
                        _debug("   missing value for domain attribute")
                        bad_cookie = True
                        break
                    # RFC 2965 section 3.3.3
                    v = v.lower()
                if k == "expires":
                    if max_age_set:
                        # Prefer max-age to expires (like Mozilla)
                        continue
                    if v is None:
                        _debug("   missing or invalid value for expires "
                              "attribute: treating as session cookie")
                        continue
                if k == "max-age":
                    max_age_set = True
                    try:
                        v = int(v)
                    except ValueError:
                        _debug("   missing or invalid (non-numeric) value for "
                              "max-age attribute")
                        bad_cookie = True
                        break
                    # convert RFC 2965 Max-Age to seconds since epoch
                    # XXX Strictly you're supposed to follow RFC 2616
                    #   age-calculation rules.  Remember that zero Max-Age
                    #   is a request to discard (old and new) cookie, though.
                    k = "expires"
                    v = self._now + v
                if (k in value_attrs) or (k in boolean_attrs):
                    if (v is None and
                        k not in ("port", "comment", "commenturl")):
                        _debug("   missing value for %s attribute" % k)
                        bad_cookie = True
                        break
                    standard[k] = v
                else:
                    rest[k] = v

            if bad_cookie:
                continue

            cookie_tuples.append((name, value, standard, rest))

        return cookie_tuples
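
    # For example, an attrs_set entry like
    #     [("foo", "bar"), ("Max-Age", "3600"), ("secure", None)]
    # normalises to the tuple
    #     ("foo", "bar", {"expires": self._now + 3600, "secure": True}, {})
    # with Max-Age converted to an absolute expiry time, and the valueless
    # boolean "secure" attribute coerced to True.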

    def _cookie_from_cookie_tuple(self, tup, request):
        # standard is dict of standard cookie-attributes, rest is dict of the
        # rest of them
        name, value, standard, rest = tup

        domain = standard.get("domain", Absent)
        path = standard.get("path", Absent)
        port = standard.get("port", Absent)
        expires = standard.get("expires", Absent)

        # set the easy defaults
        version = standard.get("version", None)
        if version is not None:
            try:
                version = int(version)
            except ValueError:
                return None  # invalid version, ignore cookie
        secure = standard.get("secure", False)
        # (discard is also set if expires is Absent)
        discard = standard.get("discard", False)
        comment = standard.get("comment", None)
        comment_url = standard.get("commenturl", None)

        # set default path
        if path is not Absent and path != "":
            path_specified = True
            path = escape_path(path)
        else:
            path_specified = False
            path = request_path(request)
            i = path.rfind("/")
            if i != -1:
                if version == 0:
                    # Netscape spec parts company from reality here
                    path = path[:i]
                else:
                    path = path[:i+1]
            if len(path) == 0: path = "/"

        # set default domain
        domain_specified = domain is not Absent
        # but first we have to remember whether it starts with a dot
        domain_initial_dot = False
        if domain_specified:
            domain_initial_dot = bool(domain.startswith("."))
        if domain is Absent:
            req_host, erhn = eff_request_host(request)
            domain = erhn
        elif not domain.startswith("."):
            domain = "."+domain

        # set default port
        port_specified = False
        if port is not Absent:
            if port is None:
                # Port attr present, but has no value: default to request port.
                # Cookie should then only be sent back on that port.
                port = request_port(request)
            else:
                port_specified = True
                port = re.sub(r"\s+", "", port)
        else:
            # No port attr present.  Cookie can be sent back on any port.
            port = None

        # set default expires and discard
        if expires is Absent:
            expires = None
            discard = True
        elif expires <= self._now:
            # An expiry date in the past is a request to delete the cookie.
            # This can't be handled in DefaultCookiePolicy, because cookies
            # can't be deleted there.
            try:
                self.clear(domain, path, name)
            except KeyError:
                pass
            _debug("Expiring cookie, domain='%s', path='%s', name='%s'",
                   domain, path, name)
            return None

        return Cookie(version,
                      name, value,
                      port, port_specified,
                      domain, domain_specified, domain_initial_dot,
                      path, path_specified,
                      secure,
                      expires,
                      discard,
                      comment,
                      comment_url,
                      rest)

    def _cookies_from_attrs_set(self, attrs_set, request):
        cookie_tuples = self._normalized_cookie_tuples(attrs_set)

        cookies = []
        for tup in cookie_tuples:
            cookie = self._cookie_from_cookie_tuple(tup, request)
            if cookie: cookies.append(cookie)
        return cookies

    def _process_rfc2109_cookies(self, cookies):
        rfc2109_as_ns = getattr(self._policy, 'rfc2109_as_netscape', None)
        if rfc2109_as_ns is None:
            rfc2109_as_ns = not self._policy.rfc2965
        for cookie in cookies:
            if cookie.version == 1:
                cookie.rfc2109 = True
                if rfc2109_as_ns:
                    # treat 2109 cookies as Netscape cookies rather than
                    # as RFC2965 cookies
                    cookie.version = 0

    def make_cookies(self, response, request):
        """Return sequence of Cookie objects extracted from response object."""
        # get cookie-attributes for RFC 2965 and Netscape protocols
        headers = response.info()
        rfc2965_hdrs = headers.getheaders("Set-Cookie2")
        ns_hdrs = headers.getheaders("Set-Cookie")

        rfc2965 = self._policy.rfc2965
        netscape = self._policy.netscape

        if ((not rfc2965_hdrs and not ns_hdrs) or
            (not ns_hdrs and not rfc2965) or
            (not rfc2965_hdrs and not netscape) or
            (not netscape and not rfc2965)):
            return []  # no relevant cookie headers: quick exit

        try:
            cookies = self._cookies_from_attrs_set(
                split_header_words(rfc2965_hdrs), request)
        except Exception:
            _warn_unhandled_exception()
            cookies = []

        if ns_hdrs and netscape:
            try:
                # RFC 2109 and Netscape cookies
                ns_cookies = self._cookies_from_attrs_set(
                    parse_ns_headers(ns_hdrs), request)
            except Exception:
                _warn_unhandled_exception()
                ns_cookies = []
            self._process_rfc2109_cookies(ns_cookies)

            # Look for Netscape cookies (from Set-Cookie headers) that match
            # corresponding RFC 2965 cookies (from Set-Cookie2 headers).
            # For each match, keep the RFC 2965 cookie and ignore the Netscape
            # cookie (RFC 2965 section 9.1).  Actually, RFC 2109 cookies are
            # bundled in with the Netscape cookies for this purpose, which is
            # reasonable behaviour.
            if rfc2965:
                lookup = {}
                for cookie in cookies:
                    lookup[(cookie.domain, cookie.path, cookie.name)] = None

                def no_matching_rfc2965(ns_cookie, lookup=lookup):
                    key = ns_cookie.domain, ns_cookie.path, ns_cookie.name
                    return key not in lookup
                ns_cookies = filter(no_matching_rfc2965, ns_cookies)

            if ns_cookies:
                cookies.extend(ns_cookies)

        return cookies

    def set_cookie_if_ok(self, cookie, request):
        """Set a cookie if policy says it's OK to do so."""
        self._cookies_lock.acquire()
        try:
            self._policy._now = self._now = int(time.time())

            if self._policy.set_ok(cookie, request):
                self.set_cookie(cookie)

        finally:
            self._cookies_lock.release()

    def set_cookie(self, cookie):
        """Set a cookie, without checking whether or not it should be set."""
        c = self._cookies
        self._cookies_lock.acquire()
        try:
            if cookie.domain not in c: c[cookie.domain] = {}
            c2 = c[cookie.domain]
            if cookie.path not in c2: c2[cookie.path] = {}
            c3 = c2[cookie.path]
            c3[cookie.name] = cookie
        finally:
            self._cookies_lock.release()

    def extract_cookies(self, response, request):
        """Extract cookies from response, where allowable given the request."""
        _debug("extract_cookies: %s", response.info())
        self._cookies_lock.acquire()
        try:
            self._policy._now = self._now = int(time.time())

            for cookie in self.make_cookies(response, request):
                if self._policy.set_ok(cookie, request):
                    _debug(" setting cookie: %s", cookie)
                    self.set_cookie(cookie)
        finally:
            self._cookies_lock.release()

    def clear(self, domain=None, path=None, name=None):
        """Clear some cookies.

        Invoking this method without arguments will clear all cookies.  If
        given a single argument, only cookies belonging to that domain will be
        removed.  If given two arguments, cookies belonging to the specified
        path within that domain are removed.  If given three arguments, then
        the cookie with the specified name, path and domain is removed.

        Raises KeyError if no matching cookie exists.

        """
        if name is not None:
            if (domain is None) or (path is None):
                raise ValueError(
                    "domain and path must be given to remove a cookie by name")
            del self._cookies[domain][path][name]
        elif path is not None:
            if domain is None:
                raise ValueError(
                    "domain must be given to remove cookies by path")
            del self._cookies[domain][path]
        elif domain is not None:
            del self._cookies[domain]
        else:
            self._cookies = {}

    def clear_session_cookies(self):
        """Discard all session cookies.

        Note that the .save() method won't save session cookies anyway, unless
        you ask otherwise by passing a true ignore_discard argument.

        """
        self._cookies_lock.acquire()
        try:
            for cookie in self:
                if cookie.discard:
                    self.clear(cookie.domain, cookie.path, cookie.name)
        finally:
            self._cookies_lock.release()

    def clear_expired_cookies(self):
        """Discard all expired cookies.

        You probably don't need to call this method: expired cookies are never
        sent back to the server (provided you're using DefaultCookiePolicy),
        this method is called by CookieJar itself every so often, and the
        .save() method won't save expired cookies anyway (unless you ask
        otherwise by passing a true ignore_expires argument).

        """
        self._cookies_lock.acquire()
        try:
            now = time.time()
            for cookie in self:
                if cookie.is_expired(now):
                    self.clear(cookie.domain, cookie.path, cookie.name)
        finally:
            self._cookies_lock.release()

    def __iter__(self):
        return deepvalues(self._cookies)

    def __len__(self):
        """Return number of contained cookies."""
        i = 0
        for cookie in self: i = i + 1
        return i

    def __repr__(self):
        r = []
        for cookie in self: r.append(repr(cookie))
        return "<%s[%s]>" % (self.__class__.__name__, ", ".join(r))

    def __str__(self):
        r = []
        for cookie in self: r.append(str(cookie))
        return "<%s[%s]>" % (self.__class__.__name__, ", ".join(r))


# derives from IOError for backwards-compatibility with Python 2.4.0
class LoadError(IOError): pass
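
# A minimal usage sketch for CookieJar (hedged: "example.com" and the urllib2
# wiring are illustrative assumptions, not mandated by this module):
#
#     import urllib2, cookielib
#     jar = cookielib.CookieJar()
#     opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
#     opener.open("http://example.com/")   # extract_cookies() fills the jar
#     jar.clear("example.com")             # drop all cookies for one domain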

class FileCookieJar(CookieJar):
    """CookieJar that can be loaded from and saved to a file."""

    def __init__(self, filename=None, delayload=False, policy=None):
        """
        Cookies are NOT loaded from the named file until either the .load() or
        .revert() method is called.

        """
        CookieJar.__init__(self, policy)
        if filename is not None:
            try:
                filename+""
            except:
                raise ValueError("filename must be string-like")
        self.filename = filename
        self.delayload = bool(delayload)

    def save(self, filename=None, ignore_discard=False, ignore_expires=False):
        """Save cookies to a file."""
        raise NotImplementedError()

    def load(self, filename=None, ignore_discard=False, ignore_expires=False):
        """Load cookies from a file."""
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        f = open(filename)
        try:
            self._really_load(f, filename, ignore_discard, ignore_expires)
        finally:
            f.close()

    def revert(self, filename=None,
               ignore_discard=False, ignore_expires=False):
        """Clear all cookies and reload cookies from a saved file.

        Raises LoadError (or IOError) if reversion is not successful; the
        object's state will not be altered if this happens.

        """
        if filename is None:
            if self.filename is not None: filename = self.filename
            else: raise ValueError(MISSING_FILENAME_TEXT)

        self._cookies_lock.acquire()
        try:
            old_state = copy.deepcopy(self._cookies)
            self._cookies = {}
            try:
                self.load(filename, ignore_discard, ignore_expires)
            except (LoadError, IOError):
                self._cookies = old_state
                raise
        finally:
            self._cookies_lock.release()

from _LWPCookieJar import LWPCookieJar, lwp_cookie_str
from _MozillaCookieJar import MozillaCookieJar
"""Simple HTTP Server.

This module builds on BaseHTTPServer by implementing the standard GET
and HEAD requests in a fairly straightforward manner.

"""


__version__ = "0.6"

__all__ = ["SimpleHTTPRequestHandler"]

import os
import posixpath
import BaseHTTPServer
import urllib
import urlparse
import cgi
import sys
import shutil
import mimetypes
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO


class SimpleHTTPRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):

    """Simple HTTP request handler with GET and HEAD commands.

    This serves files from the current directory and any of its
    subdirectories.  The MIME type for files is determined by
    calling the .guess_type() method.

    The GET and HEAD requests are identical except that the HEAD
    request omits the actual contents of the file.

    """

    server_version = "SimpleHTTP/" + __version__

    def do_GET(self):
        """Serve a GET request."""
        f = self.send_head()
        if f:
            try:
                self.copyfile(f, self.wfile)
            finally:
                f.close()

    def do_HEAD(self):
        """Serve a HEAD request."""
        f = self.send_head()
        if f:
            f.close()

    def send_head(self):
        """Common code for GET and HEAD commands.

        This sends the response code and MIME headers.

        Return value is either a file object (which has to be copied
        to the outputfile by the caller unless the command was HEAD,
        and must be closed by the caller under all circumstances), or
        None, in which case the caller has nothing further to do.

        """
        path = self.translate_path(self.path)
        f = None
        if os.path.isdir(path):
            parts = urlparse.urlsplit(self.path)
            if not parts.path.endswith('/'):
                # redirect browser - doing basically what apache does
                self.send_response(301)
                new_parts = (parts[0], parts[1], parts[2] + '/',
                             parts[3], parts[4])
                new_url = urlparse.urlunsplit(new_parts)
                self.send_header("Location", new_url)
                self.end_headers()
                return None
            for index in "index.html", "index.htm":
                index = os.path.join(path, index)
                if os.path.exists(index):
                    path = index
                    break
            else:
                return self.list_directory(path)
        ctype = self.guess_type(path)
        try:
            # Always read in binary mode. Opening files in text mode may cause
            # newline translations, making the actual size of the content
            # transmitted *less* than the content-length!
            f = open(path, 'rb')
        except IOError:
            self.send_error(404, "File not found")
            return None
        try:
            self.send_response(200)
            self.send_header("Content-type", ctype)
            fs = os.fstat(f.fileno())
            self.send_header("Content-Length", str(fs[6]))
            self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
            self.end_headers()
            return f
        except:
            f.close()
            raise

    def list_directory(self, path):
        """Helper to produce a directory listing (absent index.html).

        Return value is either a file object, or None (indicating an
        error).  In either case, the headers are sent, making the
        interface the same as for send_head().

        """
        try:
            list = os.listdir(path)
        except os.error:
            self.send_error(404, "No permission to list directory")
            return None
        list.sort(key=lambda a: a.lower())
        f = StringIO()
        displaypath = cgi.escape(urllib.unquote(self.path))
        f.write('<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">')
        f.write("<html>\n<title>Directory listing for %s</title>\n" % displaypath)
        f.write("<body>\n<h2>Directory listing for %s</h2>\n" % displaypath)
        f.write("<hr>\n<ul>\n")
        for name in list:
            fullname = os.path.join(path, name)
            displayname = linkname = name
            # Append / for directories or @ for symbolic links
            if os.path.isdir(fullname):
                displayname = name + "/"
                linkname = name + "/"
            if os.path.islink(fullname):
                displayname = name + "@"
                # Note: a link to a directory displays with @ and links with /
            f.write('<li><a href="%s">%s</a>\n'
                    % (urllib.quote(linkname), cgi.escape(displayname)))
        f.write("</ul>\n<hr>\n</body>\n</html>\n")
        length = f.tell()
        f.seek(0)
        self.send_response(200)
        encoding = sys.getfilesystemencoding()
        self.send_header("Content-type", "text/html; charset=%s" % encoding)
        self.send_header("Content-Length", str(length))
        self.end_headers()
        return f

    def translate_path(self, path):
        """Translate a /-separated PATH to the local filename syntax.

        Components that mean special things to the local file system
        (e.g. drive or directory names) are ignored.  (XXX They should
        probably be diagnosed.)

        """
        # abandon query parameters
        path = path.split('?',1)[0]
        path = path.split('#',1)[0]
        # Don't forget explicit trailing slash when normalizing. Issue17324
        trailing_slash = path.rstrip().endswith('/')
        path = posixpath.normpath(urllib.unquote(path))
        words = path.split('/')
        words = filter(None, words)
        path = os.getcwd()
        for word in words:
            if os.path.dirname(word) or word in (os.curdir, os.pardir):
                # Ignore components that are not a simple file/directory name
                continue
            path = os.path.join(path, word)
        if trailing_slash:
            path += '/'
        return path

    def copyfile(self, source, outputfile):
        """Copy all data between two file objects.

        The SOURCE argument is a file object open for reading
        (or anything with a read() method) and the DESTINATION
        argument is a file object open for writing (or
        anything with a write() method).

        The only reason for overriding this would be to change
        the block size or perhaps to replace newlines by CRLF
        -- note however that the default server uses this
        to copy binary data as well.

        """
        shutil.copyfileobj(source, outputfile)

    def guess_type(self, path):
        """Guess the type of a file.

        Argument is a PATH (a filename).

        Return value is a string of the form type/subtype,
        usable for a MIME Content-type header.

        The default implementation looks the file's extension
        up in the table self.extensions_map, using application/octet-stream
        as a default; however it would be permissible (if
        slow) to look inside the data to make a better guess.

        """

        base, ext = posixpath.splitext(path)
        if ext in self.extensions_map:
            return self.extensions_map[ext]
        ext = ext.lower()
        if ext in self.extensions_map:
            return self.extensions_map[ext]
        else:
            return self.extensions_map['']

    if not mimetypes.inited:
        mimetypes.init() # try to read system mime.types
    extensions_map = mimetypes.types_map.copy()
    extensions_map.update({
        '': 'application/octet-stream', # Default
        '.py': 'text/plain',
        '.c': 'text/plain',
        '.h': 'text/plain',
        })


def test(HandlerClass = SimpleHTTPRequestHandler,
         ServerClass = BaseHTTPServer.HTTPServer):
    BaseHTTPServer.test(HandlerClass, ServerClass)


if __name__ == '__main__':
    test()
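
# Quick usage sketch (assumes this module is importable as SimpleHTTPServer,
# as in the Python 2 standard library):
#
#     python -m SimpleHTTPServer 8000
#
# serves the current working directory over HTTP on port 8000 using the
# SimpleHTTPRequestHandler defined above.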
"""Helper class to quickly write a loop over all standard input files.

Typical use is:

    import fileinput
    for line in fileinput.input():
        process(line)

This iterates over the lines of all files listed in sys.argv[1:],
defaulting to sys.stdin if the list is empty.  If a filename is '-' it
is also replaced by sys.stdin.  To specify an alternative list of
filenames, pass it as the argument to input().  A single file name is
also allowed.

Functions filename(), lineno() return the filename and cumulative line
number of the line that has just been read; filelineno() returns its
line number in the current file; isfirstline() returns true iff the
line just read is the first line of its file; isstdin() returns true
iff the line was read from sys.stdin.  Function nextfile() closes the
current file so that the next iteration will read the first line from
the next file (if any); lines not read from the file will not count
towards the cumulative line count; the filename is not changed until
after the first line of the next file has been read.  Function close()
closes the sequence.

Before any lines have been read, filename() returns None and both line
numbers are zero; nextfile() has no effect.  After all lines have been
read, filename() and the line number functions return the values
pertaining to the last line read; nextfile() has no effect.

All files are opened in text mode by default; you can override this by
setting the mode parameter to input() or FileInput.__init__().
If an I/O error occurs during opening or reading a file, the IOError
exception is raised.

If sys.stdin is used more than once, the second and further use will
return no lines, except perhaps for interactive use, or if it has been
explicitly reset (e.g. using sys.stdin.seek(0)).

Empty files are opened and immediately closed; the only time their
presence in the list of filenames is noticeable at all is when the
last file opened is empty.

It is possible that the last line of a file doesn't end in a newline
character; otherwise lines are returned including the trailing
newline.

Class FileInput is the implementation; its methods filename(),
lineno(), fileline(), isfirstline(), isstdin(), nextfile() and close()
correspond to the functions in the module.  In addition it has a
readline() method which returns the next input line, and a
__getitem__() method which implements the sequence behavior.  The
sequence must be accessed in strictly sequential order; sequence
access and readline() cannot be mixed.

Optional in-place filtering: if the keyword argument inplace=1 is
passed to input() or to the FileInput constructor, the file is moved
to a backup file and standard output is directed to the input file.
This makes it possible to write a filter that rewrites its input file
in place.  If the keyword argument backup=".<some extension>" is also
given, it specifies the extension for the backup file, and the backup
file remains around; by default, the extension is ".bak" and it is
deleted when the output file is closed.  In-place filtering is
disabled when standard input is read.  XXX The current implementation
does not work for MS-DOS 8+3 filesystems.

XXX Possible additions:

- optional getopt argument processing
- isatty()
- read(), read(size), even readlines()

"""

import sys, os

__all__ = ["input","close","nextfile","filename","lineno","filelineno",
           "isfirstline","isstdin","FileInput"]

_state = None

# No longer used
DEFAULT_BUFSIZE = 8*1024

def input(files=None, inplace=0, backup="", bufsize=0,
          mode="r", openhook=None):
    """Return an instance of the FileInput class, which can be iterated.

    The parameters are passed to the constructor of the FileInput class.
    The returned instance, in addition to being an iterator,
    keeps global state for the functions of this module.
    """
    global _state
    if _state and _state._file:
        raise RuntimeError, "input() already active"
    _state = FileInput(files, inplace, backup, bufsize, mode, openhook)
    return _state

def close():
    """Close the sequence."""
    global _state
    state = _state
    _state = None
    if state:
        state.close()

def nextfile():
    """
    Close the current file so that the next iteration will read the first
    line from the next file (if any); lines not read from the file will
    not count towards the cumulative line count. The filename is not
    changed until after the first line of the next file has been read.
    Before the first line has been read, this function has no effect;
    it cannot be used to skip the first file. After the last line of the
    last file has been read, this function has no effect.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.nextfile()

def filename():
    """
    Return the name of the file currently being read.
    Before the first line has been read, returns None.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.filename()

def lineno():
    """
    Return the cumulative line number of the line that has just been read.
    Before the first line has been read, returns 0. After the last line
    of the last file has been read, returns the line number of that line.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.lineno()

def filelineno():
    """
    Return the line number in the current file. Before the first line
    has been read, returns 0. After the last line of the last file has
    been read, returns the line number of that line within the file.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.filelineno()

def fileno():
    """
    Return the file number of the current file. When no file is currently
    opened, returns -1.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.fileno()

def isfirstline():
    """
    Returns true if the line just read is the first line of its file,
    otherwise returns false.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.isfirstline()

def isstdin():
    """
    Returns true if the last line was read from sys.stdin,
    otherwise returns false.
    """
    if not _state:
        raise RuntimeError, "no active input()"
    return _state.isstdin()

class FileInput:
    """FileInput([files[, inplace[, backup[, bufsize[, mode[, openhook]]]]]])

    Class FileInput is the implementation of the module; its methods
    filename(), lineno(), filelineno(), isfirstline(), isstdin(), fileno(),
    nextfile() and close() correspond to the functions of the same name
    in the module.
    In addition it has a readline() method which returns the next
    input line, and a __getitem__() method which implements the
    sequence behavior. The sequence must be accessed in strictly
    sequential order; random access and readline() cannot be mixed.
    """

    def __init__(self, files=None, inplace=0, backup="", bufsize=0,
                 mode="r", openhook=None):
        if isinstance(files, basestring):
            files = (files,)
        else:
            if files is None:
                files = sys.argv[1:]
            if not files:
                files = ('-',)
            else:
                files = tuple(files)
        self._files = files
        self._inplace = inplace
        self._backup = backup
        self._savestdout = None
        self._output = None
        self._filename = None
        self._startlineno = 0
        self._filelineno = 0
        self._file = None
        self._isstdin = False
        self._backupfilename = None
        # restrict mode argument to reading modes
        if mode not in ('r', 'rU', 'U', 'rb'):
            raise ValueError("FileInput opening mode must be one of "
                             "'r', 'rU', 'U' and 'rb'")
        self._mode = mode
        if inplace and openhook:
            raise ValueError("FileInput cannot use an opening hook in inplace mode")
        elif openhook and not hasattr(openhook, '__call__'):
            raise ValueError("FileInput openhook must be callable")
        self._openhook = openhook

    def __del__(self):
        self.close()

    def close(self):
        try:
            self.nextfile()
        finally:
            self._files = ()

    def __iter__(self):
        return self

    def next(self):
        while 1:
            line = self._readline()
            if line:
                self._filelineno += 1
                return line
            if not self._file:
                raise StopIteration
            self.nextfile()
            # repeat with next file

    def __getitem__(self, i):
        if i != self.lineno():
            raise RuntimeError, "accessing lines out of order"
        try:
            return self.next()
        except StopIteration:
            raise IndexError, "end of input reached"

    def nextfile(self):
        savestdout = self._savestdout
        self._savestdout = 0
        if savestdout:
            sys.stdout = savestdout

        output = self._output
        self._output = 0
        try:
            if output:
                output.close()
        finally:
            file = self._file
            self._file = None
            try:
                del self._readline  # restore FileInput._readline
            except AttributeError:
                pass
            try:
                if file and not self._isstdin:
                    file.close()
            finally:
                backupfilename = self._backupfilename
                self._backupfilename = 0
                if backupfilename and not self._backup:
                    try: os.unlink(backupfilename)
                    except OSError: pass

                self._isstdin = False

    def readline(self):
        while 1:
            line = self._readline()
            if line:
                self._filelineno += 1
                return line
            if not self._file:
                return line
            self.nextfile()
            # repeat with next file

    def _readline(self):
        if not self._files:
            return ""
        self._filename = self._files[0]
        self._files = self._files[1:]
        self._startlineno = self.lineno()
        self._filelineno = 0
        self._file = None
        self._isstdin = False
        self._backupfilename = 0
        if self._filename == '-':
            self._filename = '<stdin>'
            self._file = sys.stdin
            self._isstdin = True
        else:
            if self._inplace:
                self._backupfilename = (
                    self._filename + (self._backup or os.extsep+"bak"))
                try: os.unlink(self._backupfilename)
                except os.error: pass
                # The next few lines may raise IOError
                os.rename(self._filename, self._backupfilename)
                self._file = open(self._backupfilename, self._mode)
                try:
                    perm = os.fstat(self._file.fileno()).st_mode
                except OSError:
                    self._output = open(self._filename, "w")
                else:
                    fd = os.open(self._filename,
                                    os.O_CREAT | os.O_WRONLY | os.O_TRUNC,
                                    perm)
                    self._output = os.fdopen(fd, "w")
                    try:
                        if hasattr(os, 'chmod'):
                            os.chmod(self._filename, perm)
                    except OSError:
                        pass
                self._savestdout = sys.stdout
                sys.stdout = self._output
            else:
                # This may raise IOError
                if self._openhook:
                    self._file = self._openhook(self._filename, self._mode)
                else:
                    self._file = open(self._filename, self._mode)

        self._readline = self._file.readline  # hide FileInput._readline
        return self._readline()

    def filename(self):
        return self._filename

    def lineno(self):
        return self._startlineno + self._filelineno

    def filelineno(self):
        return self._filelineno

    def fileno(self):
        if self._file:
            try:
                return self._file.fileno()
            except ValueError:
                return -1
        else:
            return -1

    def isfirstline(self):
        return self._filelineno == 1

    def isstdin(self):
        return self._isstdin


def hook_compressed(filename, mode):
    ext = os.path.splitext(filename)[1]
    if ext == '.gz':
        import gzip
        return gzip.open(filename, mode)
    elif ext == '.bz2':
        import bz2
        return bz2.BZ2File(filename, mode)
    else:
        return open(filename, mode)


def hook_encoded(encoding):
    import io
    def openhook(filename, mode):
        mode = mode.replace('U', '').replace('b', '') or 'r'
        return io.open(filename, mode, encoding=encoding, newline='')
    return openhook


def _test():
    import getopt
    inplace = 0
    backup = 0
    opts, args = getopt.getopt(sys.argv[1:], "ib:")
    for o, a in opts:
        if o == '-i': inplace = 1
        if o == '-b': backup = a
    for line in input(args, inplace=inplace, backup=backup):
        if line[-1:] == '\n': line = line[:-1]
        if line[-1:] == '\r': line = line[:-1]
        print "%d: %s[%d]%s %s" % (lineno(), filename(), filelineno(),
                                   isfirstline() and "*" or "", line)
    print "%d: %s[%d]" % (lineno(), filename(), filelineno())

if __name__ == '__main__':
    _test()
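The counter functions above are easiest to understand from a short session. A minimal sketch (the temporary file names come from tempfile and are otherwise arbitrary; this uses only the module-level interface defined in this file):

```python
import fileinput
import os
import sys
import tempfile

# Two throwaway input files.
paths = []
for text in ("a\nb\n", "c\n"):
    fd, path = tempfile.mkstemp()
    os.write(fd, text.encode())
    os.close(fd)
    paths.append(path)

# lineno() is cumulative across files; filelineno() restarts at 1 for
# each file, and isfirstline() is true exactly when it equals 1.
records = []
for line in fileinput.input(paths):
    records.append((fileinput.lineno(), fileinput.filelineno(),
                    fileinput.isfirstline()))
fileinput.close()
# records == [(1, 1, True), (2, 2, False), (3, 1, True)]

# With inplace=1 the file is renamed to a backup and sys.stdout is
# redirected into a fresh file of the same name, so whatever we write
# replaces the original contents.
for line in fileinput.input(paths[0], inplace=1):
    sys.stdout.write(line.upper())
fileinput.close()

with open(paths[0]) as f:
    edited = f.read()

for p in paths:
    os.remove(p)
```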
# /usr/lib64/python2.7/antigravity.py (recovered from bytecode)
import webbrowser
webbrowser.open("http://xkcd.com/353/")

"""Generic interface to all dbm clones.

Instead of

        import dbm
        d = dbm.open(file, 'w', 0666)

use

        import anydbm
        d = anydbm.open(file, 'w')

The returned object is a dbhash, gdbm, dbm or dumbdbm object,
depending on the type of database being opened (determined by the whichdb
module) in the case of an existing dbm. If the dbm does not exist and
the create or new flag ('c' or 'n') was specified, the dbm type will
be determined by the availability of the modules (tested in the above
order).

It has the following interface (key and data are strings):

        d[key] = data   # store data at key (may override data at
                        # existing key)
        data = d[key]   # retrieve data at key (raise KeyError if no
                        # such key)
        del d[key]      # delete data stored at key (raises KeyError
                        # if no such key)
        flag = key in d   # true if the key exists
        list = d.keys() # return a list of all existing keys (slow!)

Future versions may change the order in which implementations are
tested for existence, and add interfaces to other dbm-like
implementations.
"""

class error(Exception):
    pass

_names = ['dbhash', 'gdbm', 'dbm', 'dumbdbm']
_errors = [error]
_defaultmod = None

for _name in _names:
    try:
        _mod = __import__(_name)
    except ImportError:
        continue
    if not _defaultmod:
        _defaultmod = _mod
    _errors.append(_mod.error)

if not _defaultmod:
    raise ImportError, "no dbm clone found; tried %s" % _names

error = tuple(_errors)

def open(file, flag='r', mode=0666):
    """Open or create database at path given by *file*.

    Optional argument *flag* can be 'r' (default) for read-only access, 'w'
    for read-write access of an existing database, 'c' for read-write access
    to a new or existing database, and 'n' for read-write access to a new
    database.

    Note: 'r' and 'w' fail if the database doesn't exist; 'c' creates it
    only if it doesn't exist; and 'n' always creates a new database.
    """

    # guess the type of an existing database
    from whichdb import whichdb
    result = whichdb(file)
    if result is None:
        # db doesn't exist
        if 'c' in flag or 'n' in flag:
            # file doesn't exist and the new
            # flag was used so use default type
            mod = _defaultmod
        else:
            raise error, "need 'c' or 'n' flag to open new db"
    elif result == "":
        # db type cannot be determined
        raise error, "db type could not be determined"
    else:
        mod = __import__(result)
    return mod.open(file, flag, mode)
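The interface described in the docstring can be exercised directly. A minimal sketch (Python 3 renamed this module to dbm, so the import is hedged accordingly; the path is a throwaway temp file):

```python
import os
import tempfile

try:
    import anydbm as db  # Python 2 name, as in this module
except ImportError:
    import dbm as db     # Python 3 successor with the same interface

path = os.path.join(tempfile.mkdtemp(), "spam")

d = db.open(path, 'c')    # 'c': create the database if it doesn't exist
d[b'key'] = b'data'       # store data at key
stored = d[b'key']        # retrieve data at key
present = b'key' in d     # true if the key exists
del d[b'key']             # delete data stored at key
d.close()
```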
# /usr/lib64/python2.7/codeop.py (reconstructed from bytecode)
"""Utilities to compile possibly incomplete Python source code.

This module provides two interfaces, broadly similar to the builtin
function compile(), which take program text, a filename and a 'mode'
and:

- Return code object if the command is complete and valid
- Return None if the command is incomplete
- Raise SyntaxError, ValueError or OverflowError if the command is a
  syntax error (OverflowError and ValueError can be produced by
  malformed literals).

Approach:

First, check if the source consists entirely of blank lines and
comments; if so, replace it with 'pass', because the built-in
parser doesn't always do the right thing for these.

Compile three times: as is, with \n, and with \n\n appended.  If it
compiles as is, it's complete.  If it compiles with one \n appended,
we expect more.  If it doesn't compile either way, we compare the
error we get when compiling with \n or \n\n appended.  If the errors
are the same, the code is broken.  But if the errors are different, we
expect more.  Not intuitive; not even guaranteed to hold in future
releases; but this matches the compiler's behavior from Python 1.4
through 2.2, at least.

Caveat:

It is possible (but not likely) that the parser stops parsing with a
successful outcome before reaching the end of the source; in this
case, trailing symbols may be ignored instead of causing an error.
For example, a backslash followed by two newlines may be followed by
arbitrary garbage.  This will be fixed once the API for the parser is
better.

The two interfaces are:

compile_command(source, filename, symbol):

    Compiles a single command in the manner described above.

CommandCompiler():

    Instances of this class have __call__ methods identical in
    signature to compile_command; the difference is that if the
    instance compiles program text containing a __future__ statement,
    the instance 'remembers' and compiles all subsequent program texts
    with the statement in force.

The module also provides another class:

Compile():

    Instances of this class act like the built-in function compile,
    but with 'memory' in the sense described above.
"""

import __future__

_features = [getattr(__future__, fname)
             for fname in __future__.all_feature_names]

__all__ = ["compile_command", "Compile", "CommandCompiler"]

PyCF_DONT_IMPLY_DEDENT = 0x200

def _maybe_compile(compiler, source, filename, symbol):
    # Check for source consisting of only blank lines and comments
    for line in source.split("\n"):
        line = line.strip()
        if line and line[0] != '#':
            break               # Leave it alone
    else:
        if symbol != "eval":
            source = "pass"     # Replace it with a 'pass' statement

    err = err1 = err2 = None
    code = code1 = code2 = None

    try:
        code = compiler(source, filename, symbol)
    except SyntaxError, err:
        pass

    try:
        code1 = compiler(source + "\n", filename, symbol)
    except SyntaxError, err1:
        pass

    try:
        code2 = compiler(source + "\n\n", filename, symbol)
    except SyntaxError, err2:
        pass

    if code:
        return code
    if not code1 and repr(err1) == repr(err2):
        raise SyntaxError, err1

def _compile(source, filename, symbol):
    return compile(source, filename, symbol, PyCF_DONT_IMPLY_DEDENT)

def compile_command(source, filename="<input>", symbol="single"):
    """Compile a command and determine whether it is incomplete.

    Arguments:

    source -- the source string; may contain \n characters
    filename -- optional filename from which source was read; default
                "<input>"
    symbol -- optional grammar start symbol; "single" (default) or "eval"

    Return value / exceptions raised:

    - Return a code object if the command is complete and valid
    - Return None if the command is incomplete
    - Raise SyntaxError, ValueError or OverflowError if the command is a
      syntax error (OverflowError and ValueError can be produced by
      malformed literals).
    """
    return _maybe_compile(_compile, source, filename, symbol)

class Compile:
    """Instances of this class behave much like the built-in compile
    function, but if one is used to compile text containing a future
    statement, it "remembers" and compiles all subsequent program texts
    with the statement in force."""

    def __init__(self):
        self.flags = PyCF_DONT_IMPLY_DEDENT

    def __call__(self, source, filename, symbol):
        codeob = compile(source, filename, symbol, self.flags, 1)
        for feature in _features:
            if codeob.co_flags & feature.compiler_flag:
                self.flags |= feature.compiler_flag
        return codeob

class CommandCompiler:
    """Instances of this class have __call__ methods identical in
    signature to compile_command; the difference is that if the
    instance compiles program text containing a __future__ statement,
    the instance 'remembers' and compiles all subsequent program texts
    with the statement in force."""

    def __init__(self):
        self.compiler = Compile()

    def __call__(self, source, filename="<input>", symbol="single"):
        """Compile a command and determine whether it is incomplete.

        Arguments:

        source -- the source string; may contain \n characters
        filename -- optional filename from which source was read;
                    default "<input>"
        symbol -- optional grammar start symbol; "single" (default) or
                  "eval"

        Return value / exceptions raised:

        - Return a code object if the command is complete and valid
        - Return None if the command is incomplete
        - Raise SyntaxError, ValueError or OverflowError if the command is a
          syntax error (OverflowError and ValueError can be produced by
          malformed literals).
        """
        return self.compiler(source, filename, symbol)
# /usr/lib64/python2.7/shutil.py (recovered from bytecode)
"""Utility functions for copying and archiving files and directory trees.

XXX The functions here don't copy the resource fork or other metadata on Mac.

"""

__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2",
           "copytree", "move", "rmtree", "Error", "SpecialFileError",
           "ExecError", "make_archive", "get_archive_formats",
           "register_archive_format", "unregister_archive_format",
           "ignore_patterns"]

class Error(EnvironmentError):
    pass

class SpecialFileError(EnvironmentError):
    """Raised when trying to do a kind of operation (e.g. copying) which is
    not supported on a special file (e.g. a named pipe)"""

class ExecError(EnvironmentError):
    """Raised when a command could not be executed"""

# The function bodies survive only as bytecode; the recovered docstrings:
#
# copyfileobj(fsrc, fdst, length=16*1024)
#     Copy data from file-like object fsrc to file-like object fdst.
# copyfile(src, dst)
#     Copy data from src to dst.
# copymode(src, dst)
#     Copy mode bits from src to dst.
# copystat(src, dst)
#     Copy the permission bits, last access time, last modification time,
#     and flags from src to dst; the file contents, owner, and group are
#     unaffected.
# copy(src, dst)
#     Copy data and mode bits ("cp src dst"). The destination may be a
#     directory.
# copy2(src, dst)
#     Copy data and metadata; return the file's destination. The
#     destination may be a directory.
# ignore_patterns(*patterns)
#     Return a callable usable as the copytree() ignore parameter;
#     patterns is a sequence of glob-style patterns used to exclude files.
# copytree(src, dst, symlinks=False, ignore=None)
#     Recursively copy a directory tree using copy2(); the destination
#     directory must not already exist.
# rmtree(path, ignore_errors=False, onerror=None)
#     Recursively delete a directory tree.
# move(src, dst)
#     Recursively move a file or directory to another location, similar
#     to the Unix "mv" command.
# make_archive(base_name, format, root_dir=None, base_dir=None, ...)
#     Create an archive file (e.g. zip or tar); format is one of "zip",
#     "tar", "gztar", or "bztar", or any other registered format.
# get_archive_formats()
#     Return a list of (name, description) tuples for supported formats.
# register_archive_format(name, function, extra_args=None, description='')
#     Register an archive format.
# unregister_archive_format(name)
#     Remove a previously registered archive format.
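The recovered make_archive() interface can be sketched as follows (the directory and file names here are arbitrary throwaway temp paths):

```python
import os
import shutil
import tempfile

# Build a tiny tree to archive.
root = tempfile.mkdtemp()
with open(os.path.join(root, "hello.txt"), "w") as f:
    f.write("hello\n")

# make_archive() appends the format's extension itself; root_dir is the
# directory conceptually chdir'ed into before archiving.
base = os.path.join(tempfile.mkdtemp(), "demo")
archive = shutil.make_archive(base, "zip", root_dir=root)
# archive == base + ".zip"

formats = [name for name, desc in shutil.get_archive_formats()]
```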