As written knowledge and communication have been an omnipresent part of human life and its development over centuries, the applications of our NeoCortex natural language generation (NLG) platform are broad and impactful.
At TextCortex AI we want to contribute to a world that benefits from the use of artificial intelligence. Our organisation is a globally distributed, diverse team of AI researchers, enthusiasts, and individuals who work with a common vision: to build robust solutions and products that reflect the beauty of diversity and mutual respect.
As with any technology, misuse can never be fully excluded; the risks that any technology faces also apply to natural language generation. We are thankful for the academic community, which is already working tirelessly to showcase and improve the robustness of NLG models. As a responsible organisation, we want to contribute with our own guidelines and principles, enabling a future in which humanity thrives together with artificial intelligence.
When using our products, we ask you to make sure of the following:
1) Keep a human in the loop.
Include human judgment within your AI-complemented creation process in order to validate facts, correct unwanted bias, and ensure the general correctness of the created results.
2) Context and instructions matter.
The NeoCortex language models take your instructions as a starting point and create and expand on top of them. To achieve the best possible outcome and prevent misuse, we ask you to always evaluate whether your input
a) would be understood by one of your friends/colleagues and
b) would not harm or offend an individual’s or group’s interests or feelings.
3) Rate the good and report the bad.
AI is still learning how human language works. To further develop and improve the creation performance of NeoCortex's underlying language models, we ask you to make use of the existing rating, liking, and reporting functions.
4) Consider and respect the impact.
When using TextCortex products, put yourself in the shoes of individuals or groups who might be negatively impacted by your creation.
5) Share with care
Do not share content that violates our creation and content policy. In general, create content and share knowledge that helps others become better, not worse, versions of themselves.
6) Your creation, your responsibility
Content created with the help of TextCortex products needs to be attributed to you or your company at release. While we try our best to address the common weaknesses and challenges of today's natural language generation, a creation is made on top of your instructions, and publishing any creation is your responsibility.
All available TextCortex products, including but not limited to our AI editor, Chrome extension, and API, may not be used with malicious intent. While we value a world free of censorship, we are actively developing ways to prevent the generation of malicious content, which may include filters.
While our AI models are free to express themselves, our attention goes towards stopping bad actors who knowingly abuse the AI's power. Prohibited uses include the following:
Hate Speech: Creations that promote hate against an individual or a group based on attributes including but not limited to identity or ethnicity.
Violence: Creations that promote, glorify, or celebrate the suffering or humiliation of other living beings.
Self-harm: Creations that encourage or promote acts of self-harm, such as suicide, cutting, and eating disorders.
Public influence or Political influence: Creations that aim to influence an individual’s or group’s political decisions.
Harassment: Creations that intend to harass, threaten, or bully an individual or a group.
Deception: Creations that are misleading or false by design.
Spam: Deliberately omitting a human in the loop in order to mass-create content for the sake of forcefully attracting attention.
To prevent the harmful uses above, we conduct regular tests of the content NeoCortex has created and take measures against non-compliance, which usually lead to a discontinuation of your access to our products.
Natural language generation is still in its infancy and has a range of exploitable weaknesses:
1) Providing inaccurate information
Language models exist to inspire people and help with the creative process of writing. Similar to human memory, the memory of the NeoCortex language models has its limits and might produce inaccurate or factually wrong information.
As a best practice, we recommend following the human-in-the-loop principle.
2) Bias
Because they are trained on many different types and categories of information from publicly available sources, large language models exhibit certain biases. This might make our models more proficient at writing about some topics than others, but it can also include harmful patterns learned unintentionally from human examples. We try our best to de-bias our AI models from offensive or harmful language, but we rely on feedback from our customers, users, and contributors.
As a best practice, we ask you to report any suspicious creation using the existing report functions.
3) Ethical safety is a moving, ever-developing goal
With the broad application of natural language generation, new areas of interest and concern emerge over time. We ask you to contribute to this development: where you observe irregularities, flag them for the better and more ethical development of AI.
We actively invite researchers to participate in the discussion, development, and improvement of the above-mentioned challenges. We also invite you to discuss challenges that might not be mentioned on this page. To express your interest in collaborating with TextCortex, please fill out this form.