This poster was presented at the Annual LCT Meeting, University of Malta, June 2023, and is based on the work by Wang et al. Large language models (LLMs) store linguistic and relational information. The knowledge encoded in LLMs can be extracted to perform partially unsupervised open information extraction. Extracting n-ary relational representations from text in an unsupervised manner supports many downstream tasks, such as knowledge base population and knowledge graph construction.