Generative artificial intelligence (AI) chatbots are gaining traction thanks to their informative, direct responses. However, growing unease over the content they generate has ignited a controversial debate on morality and ethics.
Although ChatGPT provides quality responses to prompts, observers have noted that the popular AI still lacks common sense and morality.
As concerns grow over 'immoral' AI-generated content, Anthropic has embarked on a different path. Its AI chatbot, Claude, is designed with an understanding of good and evil while minimising human intervention. Built upon a unique "constitution" inspired in part by the Universal Declaration of Human Rights, Claude adheres to ethical rules that prioritise responsible behaviour and align with established norms.
Explaining the approach to Wired, Jared Kaplan, an Anthropic co-founder and former OpenAI research consultant, described Claude's constitution as a specific set of training parameters. Trainers use these parameters to shape the model, guiding its behaviour and actively discouraging problematic actions. Framed this way, the constitution is less a hard-coded rulebook enforced at run time than a set of principles the model is trained to follow.
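To make the idea concrete, the sketch below illustrates the general critique-and-revise pattern Anthropic has publicly described for constitutional AI. It is a hypothetical sketch, not Anthropic's implementation: `generate` is a stand-in for any language-model call, and the principle text is a placeholder, not wording from Claude's actual constitution.

```python
# Hypothetical sketch of a constitutional-AI-style critique-and-revise loop.
# `generate` stands in for a real language-model call; the principle below
# is a placeholder, not text from Claude's actual constitution.

PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    # Step 1: draft an initial answer.
    draft = generate(user_prompt)

    # Step 2: ask the model to critique its own draft against the principle.
    critique = generate(
        f"Critique this response against the principle '{PRINCIPLE}':\n{draft}"
    )

    # Step 3: ask the model to rewrite the draft in light of the critique.
    revised = generate(
        f"Rewrite the response to address this critique:\n"
        f"Critique: {critique}\nOriginal: {draft}"
    )
    return revised

print(constitutional_revision("How do I pick a strong password?"))
```

In training, revised answers like these can then be used as preference data, so the finished model internalises the principles rather than consulting them at inference time.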
Claude vs Other AI Chatbots
According to an Anthropic tweet, Claude can handle over 100,000 'tokens' of information, a far larger context window than current versions of ChatGPT and Bard offer. The ChatGPT 3.5 model is limited to 4,096 tokens, covering both the prompt and the response. The latest GPT-4 model expands this to roughly 25,000 words. Google Bard, meanwhile, accepts prompts of about 4,000 characters.
In the world of AI, a "token" typically represents a piece of text, such as a word or word fragment, that the model handles as a single unit.
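Token counts differ from character or word counts, which is why the limits above are not directly comparable. As a rough illustration, OpenAI's open-source tiktoken library shows how a sentence splits into tokens; Anthropic's own tokenizer may count the same text differently.

```python
# pip install tiktoken  (OpenAI's open-source tokenizer library)
import tiktoken

# cl100k_base is the encoding used by gpt-3.5-turbo and gpt-4.
enc = tiktoken.get_encoding("cl100k_base")

text = "Claude can handle over 100,000 tokens of context."
tokens = enc.encode(text)

print(len(tokens))          # number of tokens, noticeably fewer than len(text)
print(enc.decode(tokens))   # decoding round-trips back to the original text
```

A common rule of thumb is that one token is about four characters of English text, so a 100,000-token window corresponds to tens of thousands of words.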