Metadata-Version: 2.1
Name: wildgram
Version: 0.3.9
Summary: wildgram tokenizes and separates tokens into ngrams of varying size based on the natural language breaks in the text.
Home-page: https://gitlab.com/gracekatherineturner/wildgram
Author: Grace Turner
Author-email: gracekatherineturner@gmail.com
License: UNKNOWN
Description: Wildgram tokenizes English text into "wild"-grams (tokens of varying word count)
        that match closely to the natural pauses of conversation. I originally built
        it as the first step in an abstraction pipeline for medical language: since
        medical concepts tend to be phrases of varying lengths, bag-of-words or bigrams
        doesn't really cut it.
        
        Wildgram works by measuring the size of the noise (stopwords, punctuation, and
        whitespace) and breaking the text wherever the noise exceeds a certain size
        (the threshold varies slightly depending on the kind of noise).
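        The splitting idea can be sketched in plain Python. This is only an illustration of
        the approach -- the STOPWORDS set and the naive_wildgram function below are made up
        for the sketch and are not wildgram's actual implementation:
        
        ```python
        # Illustrative sketch only: break text into candidate phrases wherever a
        # run of noise (here, just stopwords) appears. The real wildgram also
        # weighs punctuation and whitespace and the size of the noise run.
        STOPWORDS = {"and", "was", "the", "a", "of"}
        
        def naive_wildgram(text):
            phrases, current = [], []
            for word in text.split():
                if word.lower() in STOPWORDS:
                    # noise: close off the phrase collected so far
                    if current:
                        phrases.append(" ".join(current))
                        current = []
                else:
                    current.append(word)
            if current:
                phrases.append(" ".join(current))
            return phrases
        
        print(naive_wildgram("and was a beautiful day"))  # ['beautiful day']
        ```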
        
        Parameters:
        
        text
        Required: Yes
        Default: No
        What it is: the text you want to wildgram.
        
        stopwords
        Required: No
        Default: STOPWORDS list (importable, mostly based on NLTK's stop word list)
        What it is: a list of stop words that you want to mark as noise, that will act as breaks between tokens.
        Custom Override: a list of strings that you want to split on.
        
        topicwords
        Required: No
        Default: TOPICWORDS list (importable)
        What it is: a list of stop words that you want to mark as tokens because they carry meaning, even though they often serve to break up larger pieces of text. Examples include numbers and negation words like "won't". Words that start with a number and end with a non-space, non-digit string
        are split up, because the assumption is they are meaningfully distinct -- e.g. "123mg" -> "123", "mg".
        Custom Override: a list of strings that you want to split on. You can also store a mixed list of
        dictionaries and strings, dictionaries in the form {token: "text", tokenType: "custom type"}
        for example, by default any negation stop words (like "no") have a tokenType of "negation".
        If no tokenType is set, the type is "token".
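        For illustration, a custom topicwords list mixing both forms might look like this
        (the resolved_type helper is hypothetical, added only to show how the default
        tokenType would apply; it is not part of wildgram's API):
        
        ```python
        # Mixed list of plain strings and dictionaries, per the form above.
        custom_topicwords = [
            "never",                                   # plain string -> tokenType "token"
            {"token": "no", "tokenType": "negation"},  # dict form with explicit type
            {"token": "won't", "tokenType": "negation"},
        ]
        
        # Strings get the default tokenType "token"; dicts carry their own.
        def resolved_type(entry):
            if isinstance(entry, dict):
                return entry.get("tokenType", "token")
            return "token"
        
        print([resolved_type(e) for e in custom_topicwords])
        # ['token', 'negation', 'negation']
        ```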
        
        include1gram
        Required: No
        Default: True
        What it is: when set to true, wildgram will also return every individual word or token as well as any phrases it finds.
        Custom Override: Boolean (false). When set to false, wildgram will only return the phrases it finds, not 1grams as well.
        
        joinerwords
        Required: No
        Default: JOINERWORDS list (importable, words like "of")
        What it is: a list of stop words (must also be included in stop word list if overridden) that join two phrases together. Example: "shortness of breath" -> "shortness", "breath", "shortness of breath".
        Custom Override: a list of strings you want to join on. WORDS MUST BE IN STOPWORDS LIST FOR THIS TO WORK. The assumption is you wouldn't want a joiner word that is also a topic word.
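        The joining behavior can be sketched in plain Python (JOINERS and join_phrases
        below are illustrative stand-ins, not wildgram's internals):
        
        ```python
        JOINERS = {"of"}
        
        # Sketch: keep each non-joiner token on its own, and also emit the
        # joined phrase across every joiner word, mirroring the
        # "shortness of breath" -> "shortness", "breath", "shortness of breath" example.
        def join_phrases(words):
            out = [w for w in words if w not in JOINERS]
            for i in range(1, len(words) - 1):
                if words[i] in JOINERS:
                    out.append(" ".join(words[i - 1:i + 2]))
            return out
        
        print(join_phrases(["shortness", "of", "breath"]))
        # ['shortness', 'breath', 'shortness of breath']
        ```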
        
        returnNoise
        Required: No
        Default: True
        What it is: when set to true, wildgram will also return each individual noise token it created to find the phrases.
        Custom Override: Boolean (false). When set to false, it will not return the noise tokens.
        
        
        includeParent
        Required: No
        Default: False
        Note: In the process of being deprecated, because I didn't find it to be useful in topic organizing.
        What it is: when set to true, wildgram will also return the "parent" of the token, in a pseudo-dependency tree.
        This tree is generated using a ranked list of the prior (in the text) styles of punctuation to approximate
        the relationships between tokens. Noise tokens act as branching nodes while normal tokens can only be leaf nodes,
        so in practice this is used to determine the "uncles" of the token. Examples of how this might be useful are
        linking list-like elements under a larger heading or figuring out the unit of a number based on the context (which may not be on the same line). Since noise tokens are the branching nodes, returnNoise must be set to true if includeParent is true.
        Custom Override: Boolean (True). When set to True, wildgram will also return the parent of each token.
        
        
        Returns:
        a list of dictionaries, each dictionary in the form:
        ```python
        example = {
        "startIndex": 0,
        "endIndex": 5,
        "token": "hello",
        "tokenType": "token", # if noise, tokenType is "noise"
        "index": 0
        }
        ```
        The list is sorted in ascending (smallest->largest) order for the startIndex, then the endIndex.
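        That ordering corresponds to a standard sort on (startIndex, endIndex), for example:
        
        ```python
        # Sample tokens in the return shape described above (token text shortened).
        tokens = [
            {"startIndex": 8, "endIndex": 21, "token": "beautiful day"},
            {"startIndex": 0, "endIndex": 8,  "token": "and was "},
            {"startIndex": 8, "endIndex": 17, "token": "beautiful"},
        ]
        
        # Sort ascending by startIndex, breaking ties by endIndex.
        tokens.sort(key=lambda t: (t["startIndex"], t["endIndex"]))
        print([t["token"] for t in tokens])
        # ['and was ', 'beautiful', 'beautiful day']
        ```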
        
        
        Example code:
        
        ```python
        from wildgram import wildgram
        ranges = wildgram("and was beautiful", returnNoise=False)
        
        #[{
        #"startIndex": 8,
        #"endIndex": 17,
        #"token": "beautiful",
        #"tokenType": "token",
        # "index": 0
        #}]
        
        from wildgram import wildgram
        ranges = wildgram("and was beautiful day")
        print(ranges)
        '''
        [{
          "startIndex": 0,
          "endIndex": 8,
          "token": "and was ",
          "tokenType": "noise",
          "index": 0
        },
        {
          "startIndex": 8,
          "endIndex": 17,
          "token": "beautiful",
          "tokenType": "token",
          "index": 1
        },
        {
          "startIndex": 8,
          "endIndex": 21,
          "token": "beautiful day",
          "tokenType": "token",
          "index": 2
        },
        {
          "startIndex": 17,
          "endIndex": 18,
          "token": " ",
          "tokenType": "noise",
          "index": 3
        },
        {
          "startIndex": 18,
          "endIndex": 21,
          "token": "day",
          "tokenType": "token",
          "index": 4
        }
        ]
        '''
        ```
        
        With versions >= 0.2.9, there is also the class WildRules. It applies a set of
        rules to the tokenized wildgram output, making a basic rule-based classifier. It
        will be optimized in future versions for speed, etc. Later versions also let you
        require that given phrases appear nearby.
        example:
        ```python
        from wildgram import WildRules
        
        test = WildRules([
            {"topic": "TEST", "spans": ["testing", "test"], "spanType": "token",
             "nearby": [{"spanType": "token", "spans": ["1234"]}]},
            {"topic": "Dosage", "spans": ["numeric"], "spanType": "tokenType"}
        ])
        ret = test.apply("testing test 123")
        # note the topic for "testing test" is unknown, because "1234" is missing from the general area
        [{'topic': 'unknown', 'token': 'testing test', 'startIndex': 0, 'endIndex': 12}, {'topic': 'Dosage', 'token': '123', 'startIndex': 13, 'endIndex': 16}]
        
        ret = test.apply("testing test 1234")
        ## returns the topic TEST, since 1234 is in the area
        [{'topic': 'TEST', 'token': 'testing test', 'startIndex': 0, 'endIndex': 12}, {'topic': 'Dosage', 'token': '1234', 'startIndex': 13, 'endIndex': 17}]
        
        ```
        
        
        That's all folks!
        
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.6
Description-Content-Type: text/markdown
