

Transparency & Disclosure

Transparency & Disclosure for Students

At the University of Florida, students are encouraged to use AI tools responsibly and transparently to maintain academic integrity. When using AI for assignments, research, or any other academic work, students should clearly disclose that use, specifying which AI tools were employed and how they contributed to the final submission. Students should also follow any additional guidelines their instructors or departments provide regarding AI use.

Transparency & Disclosure for Research & Publication


UF Guidance

Using AI in Research

  • Data Quality: Researchers should ensure that the data used to train or validate AI models is accurate, relevant, and free from bias.
  • Model Validation: Researchers should validate their AI models using appropriate metrics and techniques, such as cross-validation and robustness testing.
  • Model Interpretability: Researchers should prioritize model interpretability, ensuring that the outputs of their AI models can be understood and explained.
  • Dependency on AI: Researchers should avoid over-reliance on AI models and consider the limitations and potential biases of these models.
  • Documentation: Researchers should document their AI methods and models, including data sources, hyperparameters, and model performance metrics.
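The Model Validation and Documentation points above can be sketched in a few lines of code. This is a minimal, standard-library-only illustration; the dataset, the trivial mean-predictor baseline, the fold count, and the mean-absolute-error metric are all placeholder assumptions, not prescribed choices.

```python
# Minimal k-fold cross-validation sketch (Python standard library only).
# The data, model, and metric below are illustrative placeholders.
import statistics

def k_fold_cv(xs, ys, k=5):
    """Estimate mean absolute error of a mean-predictor via k-fold CV."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    errors = []
    for fold in folds:
        held_out = set(fold)
        train_y = [y for j, y in enumerate(ys) if j not in held_out]
        prediction = statistics.mean(train_y)  # trivial baseline "model"
        fold_err = statistics.mean(abs(ys[j] - prediction) for j in fold)
        errors.append(fold_err)
    return statistics.mean(errors), statistics.stdev(errors)

# Documenting the run: data source, fold count, and performance metric
# would all be recorded alongside the result, per the guidance above.
data_x = list(range(20))                  # placeholder feature
data_y = [float(x % 4) for x in data_x]   # placeholder target
mae, spread = k_fold_cv(data_x, data_y, k=5)
print(f"5-fold CV MAE: {mae:.3f} (+/- {spread:.3f})")
```

Reporting both the mean and the spread across folds, as here, is one simple way to make a model-performance claim reproducible and documentable.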

Research on AI

  • Responsible Innovation: Researchers should ensure that their AI research is conducted in a responsible and transparent manner, with consideration for the potential risks and benefits of their work.
  • Informed Consent: Researchers should obtain informed consent from participants before collecting data or using AI systems that involve human subjects.
  • Data Protection: Researchers should ensure that all data collected or used in their AI research is protected in accordance with UF’s Data Classification Policy and relevant regulations such as HIPAA, FERPA, and export control requirements.
  • Algorithmic Bias: Researchers should be aware of the potential for algorithmic bias in their AI systems and take steps to mitigate it, such as using diverse and representative data sets. The research methodology should include validation and verification steps to check for bias in both the input data and the output of the AI technology.
  • Transparency and Explainability: Researchers should prioritize transparency and explainability in their AI systems, ensuring that the decision-making processes are understandable and interpretable.
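One concrete verification step for the Algorithmic Bias point above is checking whether every group is adequately represented in the input data. The sketch below is a minimal, standard-library-only illustration; the group labels, sample data, and 20% threshold are placeholder assumptions, not a UF-specified procedure.

```python
# Minimal representation check on an input dataset (illustrative labels).
# Flags any group whose share of the data falls below a chosen threshold.
from collections import Counter

def representation_report(groups, min_share=0.2):
    """Return {group: (share, meets_threshold)} for each group label."""
    counts = Counter(groups)
    total = len(groups)
    return {g: (n / total, n / total >= min_share) for g, n in counts.items()}

sample = ["A", "A", "A", "B", "B", "C"]  # placeholder group labels
for group, (share, ok) in sorted(representation_report(sample).items()):
    status = "ok" if ok else "UNDER-REPRESENTED"
    print(f"group {group}: {share:.0%} {status}")
```

A check like this addresses only one narrow facet of bias (representation in inputs); output-side audits, such as comparing error rates across groups, would complement it.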



Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.