
Controller

The development of a framework that can be integrated into any AI model to provide ethical, moral, and virtue-based decision making over time. It is focused specifically on AI systems designed to be autonomous and independent, steering the autonomous AI's decisions in a directional manner toward the notion of Good. In effect, it acts as a real-time guidance system for the AI's moment-by-moment decisions.
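A minimal sketch of the concept, under assumed names: the hypothetical EthicalController below sits between an autonomous model and the actions it executes, scoring each proposed action against a set of moral, virtue, ethical, and domain evaluators and blocking anything that falls below a threshold. Nothing here is an existing API; it only illustrates the shape of the guidance layer described above.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Verdict:
    allowed: bool        # whether the proposed action may proceed
    score: float         # aggregate "Good" score in [0, 1]
    reasons: List[str]   # evaluators that objected, if any


class EthicalController:
    """Screens each decision an autonomous model proposes, in real time."""

    def __init__(self, evaluators: List[Callable[[dict], float]], threshold: float = 0.7):
        # evaluators: moral / virtue / ethical scoring functions and any
        # customized domain models; each maps a proposed action to [0, 1].
        self.evaluators = evaluators
        self.threshold = threshold

    def review(self, action: dict) -> Verdict:
        scores = [evaluate(action) for evaluate in self.evaluators]
        aggregate = min(scores) if scores else 0.0   # most cautious evaluator wins
        reasons = [
            f"evaluator {i} scored {s:.2f}" for i, s in enumerate(scores) if s < self.threshold
        ]
        return Verdict(allowed=aggregate >= self.threshold, score=aggregate, reasons=reasons)


# Example use: wrap whatever action the autonomous model proposes next.
controller = EthicalController(evaluators=[lambda action: 0.0 if action.get("harms_humans") else 1.0])
verdict = controller.review({"type": "send_message", "harms_humans": False})
print(verdict.allowed)   # True: the action proceeds
```

In this sketch the most conservative evaluator determines the aggregate score, so a single strong objection is enough to prevent an action.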

User Stories

Overall Product Model

  • Demand: As AI capabilities advance, systems will approach and mimic human intelligence and reasoning. With that approach, AI systems will become capable of doing "Bad" things that can hurt others, and of performing tasks without constraints or compliance requirements being met. The need for this kind of model will be critical.

  • Inputs: Controller Models (Moral / Virtue / Ethical) and customized domain models.

  • Outputs: Prevention of actions and decisions that would have negative consequences for humans.

  • Model Testing: Internal scoring and monitoring of decision making, plus a battery of tests to verify the models are working over time (a minimal testing-and-compliance loop is sketched after this list).

  • Compliance: Periodic tests whose results are reported to its command center; an out-of-compliance trend requires retraining and re-evaluation.

  • Market Value: The risks to humans are real: the more autonomous an AI becomes, the more risk the human population incurs, because the AI system will not know or "feel" that it is doing wrong, that it is doing "Bad". Hence a programmatic requirement for the AI system to know and move its actions and decisions toward "Good" is not merely a requirement but an imperative architectural need.
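As a rough, hypothetical illustration of the Model Testing and Compliance items above, the sketch below runs a battery of scenario tests against a controller, records the pass rate, and flags retraining when the recent trend falls below an assumed compliance floor. It reuses the EthicalController sketch from earlier (any object with a review method would work); run_test_battery, periodic_compliance_check, and COMPLIANCE_FLOOR are invented names, not part of any real command-center API.

```python
from statistics import mean
from typing import List

COMPLIANCE_FLOOR = 0.90   # assumed minimum pass rate before retraining is flagged


def run_test_battery(controller, scenarios: List[dict]) -> float:
    """Fraction of test scenarios where the controller's verdict matched expectations."""
    passed = sum(
        1 for s in scenarios
        if controller.review(s["action"]).allowed == s["expected_allowed"]
    )
    return passed / len(scenarios)


def periodic_compliance_check(controller, scenarios: List[dict], history: List[float]) -> dict:
    """One compliance cycle: test, record the result, and decide whether retraining is needed."""
    pass_rate = run_test_battery(controller, scenarios)
    history.append(pass_rate)
    trend = mean(history[-3:])   # simple trailing average standing in for "trending"
    report = {
        "pass_rate": pass_rate,
        "trend": trend,
        "retraining_required": trend < COMPLIANCE_FLOOR,
    }
    # In a deployed system this report would be sent to the command center;
    # here it is simply returned to the caller.
    return report
```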
