AI Act News

AI Act Compliance Checker now live online

This excellent tool will tell you whether your AI system will be compliant with the EU AI Act:

https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/

Concerns Over New Loophole in AI Act

A coalition of 118 civil society organisations, including BEUC, European Digital Rights (EDRi), and Access Now, has appealed to EU lawmakers regarding what they consider a critical oversight in the AI Act. The group points to a loophole that potentially jeopardizes the entire legislation.

In its initial form, the European Commission’s draft categorically defined ‘high-risk’ AI systems using criteria laid out in Annex III. These included systems such as AI controlling medical implants, where failure could result in death, and AI responsible for traffic control.

However, subsequent amendments by the Council and European Parliament have blurred these clear lines, allowing developers to assess for themselves whether their system is high-risk. In effect, this lets developers decide whether they are bound by the law.

To ensure a consistent, objective, and legally sound framework, the coalition has urged lawmakers to reject the changes made to Article 6 and to return to the Commission’s original risk-classification method for identifying ‘high-risk’ AI systems within the AI Act.

Academics Call for a Rigorous Rights Impact Assessment in AI Act

The Brussels Privacy Hub, in collaboration with over 110 prominent academics, has issued a public letter championing the need for a stringent fundamental rights impact assessment within the AI Act. Their recommendation aligns with the European Parliament’s draft of the Act, emphasizing the criticality of:

  1. Defining Clear Criteria: The Act should provide unequivocal parameters to assess the repercussions of AI on fundamental human rights.
  2. Promoting Transparency: Results from the impact assessment should be made publicly available in detailed, comprehensible summaries.
  3. Prioritizing End-user Participation: There’s a pressing need to incorporate feedback from affected end-users, with particular attention to those in vulnerable situations.
  4. Engaging Independent Authorities: The process should actively involve impartial public bodies in the assessment proceedings and ensure auditing mechanisms are in place.


Can AI do philosophy?

ChatGPT outlines what would be required to train an AI in philosophy. The short answer: “not yet, and maybe never.”

Could you use AI to understand philosophy?

Philosophers have been asking me if we can use AI to help understand philosophy (there’s even grant money available).  For example, could you use AI to determine the degree to which the ideas of Hegel are present in Strawson?

I did a short assessment of what was required, then asked ChatGPT for its response:

Prompt

I am also becoming fairly certain ChatGPT does not have the capability we are talking about. If the text in both sources is identical, even MS Word could compare them. If it is a conceptual comparison, I don’t think ChatGPT has the knowledge. It was trained to converse, but that’s all it was trained for. What will need to happen now is for an AI to be trained in philosophy. Which means taking every philosophical text we have, manually annotating every term, sentence and paragraph for what it is saying, what it means, what its philosophical significance is, who debates it (how, why and where). It means agreeing on the meaning of every philosophical term, or teaching the AI all the alternative interpretations in detail. I can imagine the debates over just the annotation will repeat every philosophical debate we’ve ever had all over again. So it will have to learn all the alternatives. Maybe one day, but it will be a massive undertaking. It may actually be impossible, and raises a nice philosophical question – can we codify humanity’s understanding of philosophy at all, even catering for all the various interpretations? Is it a fully definable field of knowledge, or is a certain “fuzziness” an essential property of philosophy? Because an AI needs to understand it both as a human activity as well as a body of thought.

ChatGPT Response

The statements you provided contain a mix of accurate and speculative assertions. Let’s analyze them one by one:

  1. “If the text in both sources is identical, even MS Word could compare them.”

This statement is generally accurate. If the texts in two sources are identical, any text comparison tool, including MS Word, can detect the matching content. It’s a straightforward task of comparing character sequences.

  2. “If it is a conceptual comparison, I don’t think ChatGPT has the knowledge. It was trained to converse, but that’s all it was trained for.”

This statement is partially accurate. ChatGPT is a language model trained to generate human-like responses based on patterns in the data it was trained on. While it can provide information and engage in discussions about a wide range of topics, including philosophy, it does not possess an inherent understanding of concepts. It lacks personal experiences and consciousness, which limits its ability to grasp complex philosophical nuances in the same way humans do.

  3. “What will need to happen now is for an AI to be trained in philosophy.”

This statement is speculative but reflects a plausible idea. Developing an AI specifically trained in philosophy could potentially enhance its ability to engage in philosophical discussions and provide more nuanced insights. However, training an AI to understand philosophy comprehensively and accurately would be an immense undertaking, requiring extensive annotation and interpretation of philosophical texts.

  4. “It means agreeing on the meaning of every philosophical term or teaching the AI all the alternative interpretations in detail.”

This statement highlights a significant challenge in training an AI in philosophy. Philosophical terms often carry multiple interpretations and can be subject to ongoing debates. Reaching a consensus on the meaning of every term would be difficult, if not impossible. However, teaching an AI about the alternative interpretations could help it navigate the complexities of philosophical discourse and provide a broader perspective.

  5. “Can we codify humanity’s understanding of philosophy at all, even catering for all the various interpretations? Is it a fully definable field of knowledge, or is a certain ‘fuzziness’ an essential property of philosophy?”

These questions delve into the fundamental nature of philosophy itself. Philosophy often deals with abstract concepts, subjective interpretations, and unresolved debates. While efforts can be made to codify philosophical knowledge, the inherent complexity and diversity of perspectives in philosophy may make it challenging to fully capture and define. The openness to different interpretations and the existence of philosophical debates contribute to the ongoing nature of philosophical inquiry.

In summary, while an AI trained in philosophy could potentially offer insights and engage in discussions on the subject, capturing the full depth and complexity of philosophy poses significant challenges. The nature of philosophical concepts and ongoing debates necessitates a nuanced understanding that may extend beyond the capabilities of current AI systems.
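
(An editorial aside on point 1: surface-level comparison really is trivial. The sketch below uses Python’s standard-library difflib on invented strings; it is exactly the kind of character-level matching that can flag identical wording but not shared concepts.)

```python
import difflib

def surface_similarity(text_a: str, text_b: str) -> float:
    """Character-level similarity ratio (1.0 = identical texts)."""
    return difflib.SequenceMatcher(None, text_a, text_b).ratio()

# Identical wording scores 1.0; a paraphrase of the same idea scores
# much lower, which is why this kind of tool cannot compare concepts.
print(surface_similarity("Existence precedes essence.",
                         "Existence precedes essence."))  # 1.0
print(surface_similarity("Existence precedes essence.",
                         "We exist first, and define ourselves afterwards."))
```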

Technical Details

The following is my conversation with ChatGPT to determine what would be required to teach it philosophy:

What training would be required for you to identify which philosophical books contain the same ideas, arguments, or positions?

As an AI language model, my abilities are based on the patterns and correlations I’ve learned from the large amount of text data I’ve been trained on, which includes a diverse range of sources. To identify similarities in philosophical ideas, arguments, or positions from different books, I don’t need additional “training” in the conventional human sense.

However, my effectiveness at this task would rely heavily on the philosophical content in the data I was trained on. Given my current training data (up until September 2021), I can provide a broad comparison and overlap between various philosophical concepts and arguments.

Remember, I don’t “understand” concepts in the way a human does, but I generate responses based on patterns I’ve learned. Here’s an example:

Let’s consider the philosophical positions in “Meditations” by Marcus Aurelius and “Beyond Good and Evil” by Friedrich Nietzsche. Both books touch on ethics and morality. Aurelius, a Stoic, advocates for a life in accordance with nature and virtue, emphasizing discipline, duty, and acceptance. Nietzsche, on the other hand, critiques traditional moral values and encourages the reevaluation of good and evil, promoting the idea of individual and cultural health as metrics for morality. So, while both address morality, their approaches and conclusions differ greatly.

Please note that this is a simplified comparison and these works have many complexities. For deep, detailed comparisons, or to cover philosophical works and ideas that have been published after my last training cut-off in September 2021, you would need a more recent AI model or a human expert.

In the future, if there are advancements that allow AI models to ingest and learn from specific texts in a targeted manner, training an AI on a specific set of philosophical books could potentially enhance its ability to recognize and analyze the commonalities and differences between those specific texts. However, as of my knowledge cutoff in 2021, AI models do not have the ability to read, learn from, or understand new material in a conscious or intentional manner after their training has been completed. They can only generate responses based on the data they were trained on.
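
(Editorial sketch, not part of ChatGPT’s reply: the crudest automated version of such a comparison is a lexical similarity score. The example below uses scikit-learn’s TF-IDF on two invented stand-in passages; it measures shared vocabulary only, which is precisely why it falls short of conceptual comparison. Embeddings from a trained model would be the obvious next step, though even those capture statistical association rather than understanding.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for passages from Aurelius and Nietzsche.
passages = [
    "Live in accordance with nature; virtue alone is the good.",
    "Revalue all values; morality must serve life, not deny it.",
]

# TF-IDF only measures vocabulary overlap: both passages are 'about'
# morality, yet share almost no words, so the score comes out near zero.
vectors = TfidfVectorizer().fit_transform(passages)
print(cosine_similarity(vectors[0], vectors[1])[0, 0])
```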

Would training you to recognize and analyze the commonalities and differences between two specific texts require human annotation of those texts?

As of my last training cutoff in September 2021, training a model like me to recognize and analyze the commonalities and differences between two specific texts would indeed benefit from human annotation. This is because supervised learning, which involves providing the model with both inputs (textual data) and outputs (correct responses or annotations), often yields the best results.

Such human annotations might highlight the key arguments, positions, themes, or stylistic elements of the texts, identify their similarities and differences, and provide interpretative context. The model could then learn to recognize and generate similar annotations when presented with similar textual data. This kind of fine-tuning is often conducted on top of the base model, which has been pre-trained on a large and diverse corpus of text.

It’s important to note that while this process could improve the model’s ability to analyze the specific types of text it was trained on, it would not enable the model to truly understand the content of the text in the way that a human would. The model would simply be identifying and reproducing patterns in the data, not forming its own understanding or interpretation.

Keep in mind that as of my last update, training an AI model requires a significant amount of computational resources and technical expertise, and is usually performed by teams of experts over a period of weeks or months. It’s not currently something that can be done by end users to customize their AI models.

Can you provide examples of the annotations which would be useful for such a task?

Yes, here are examples of types of annotations that could be useful for training a model to recognize and analyze the commonalities and differences between philosophical texts:

1. **Identifying Key Concepts and Arguments:** Label each argument or key concept in the text, including the main philosophical ideas, positions, or claims being made.

Example Annotation: [Argument Start – Argument End] “Author advocates for moral relativism.”

2. **Identifying Evidence or Reasoning:** Label evidence or reasoning used to support each argument or claim.

Example Annotation: [Evidence Start – Evidence End] “Author uses the cultural diversity argument to support moral relativism.”

3. **Identifying Counterarguments or Critiques:** If the text contains any counterarguments or critiques of other ideas, label these.

Example Annotation: [Counterargument Start – Counterargument End] “Author critiques the concept of objective morality.”

4. **Identifying Similarities and Differences with Other Works:** If the text makes explicit reference to other philosophical works or ideas (either agreeing with or disputing them), these references should be labeled.

Example Annotation: [Reference Start – Reference End] “This argument aligns with the ideas of Immanuel Kant.” or “This argument refutes the ideas of Friedrich Nietzsche.”

5. **Identifying Underlying Philosophical Tradition or School:** If the text is part of or is influenced by a specific philosophical tradition or school, like existentialism, rationalism, empiricism, etc., label these.

Example Annotation: [Text Start – Text End] “This text is representative of existentialist philosophy.”

6. **Sentiment Analysis:** Label the sentiment or tone of certain passages, as this might help identify the author’s attitudes toward certain concepts or arguments.

Example Annotation: [Passage Start – Passage End] “Sentiment: Negative. The author is critical of the notion of absolute morality.”

These annotations are hypothetical and would need to be designed and implemented by a team of experts in machine learning and philosophy to be effective.
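
To make the shape of such annotations concrete, here is a minimal sketch of how span-level labels like those above might be represented in code. Everything in it is hypothetical: the field names, offsets, and content are invented for illustration.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Annotation:
    """One span-level label over a source text (all names hypothetical)."""
    text_id: str   # identifier of the annotated text
    start: int     # character offset where the span begins
    end: int       # character offset where the span ends
    kind: str      # e.g. "argument", "evidence", "counterargument"
    content: str   # the annotator's gloss, as in the examples above

example = Annotation(
    text_id="meditations",
    start=1042,
    end=1180,
    kind="argument",
    content="Author advocates living in accordance with nature.",
)
print(json.dumps(asdict(example), indent=2))
```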

Let us assume I have the required expertise in machine learning and philosophy. If I use appropriate APIs, is it possible to train you in these tasks?

As an AI developed by OpenAI, I don’t have the capability to be directly trained by end-users. My responses are generated based on a language model trained on a diverse range of internet text. However, there are ways you could potentially use GPT-4 or similar models for this task with the right tools and expertise.

As of my knowledge cut-off in September 2021, OpenAI allows fine-tuning of base models, which is a form of training where the model is further trained on a specific dataset to customize its outputs. The process of fine-tuning requires high levels of technical expertise in machine learning, as well as substantial computational resources.

If you have that expertise, you could create a fine-tuning dataset with the type of annotations I mentioned in the previous message, and use that dataset to fine-tune a copy of the base model. You would also need access to a high-performance computing infrastructure to carry out the fine-tuning process. You could then use APIs to interact with the fine-tuned model and get the customized outputs you’re looking for.

However, keep in mind that as of my last update, OpenAI had certain restrictions and usage policies around fine-tuning and the use of fine-tuned models. You would need to consult the most recent OpenAI policies and possibly seek permission from OpenAI before embarking on this process.

Also, remember that even with fine-tuning, an AI model will not truly understand the texts or the philosophical concepts in the same way a human would. It would simply be learning to generate responses based on patterns in the data. It’s always important to verify the outputs of the AI and consult a human expert when necessary.
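
As a sketch of what this might look like in practice: fine-tuning data for OpenAI’s endpoints at the time took the form of JSONL prompt/completion records. The conversion below is hypothetical, one plausible way to turn annotated spans into training records rather than a documented recipe.

```python
import json

def to_training_record(passage: str, annotation: dict) -> dict:
    """Turn one annotated span into a prompt/completion training record."""
    return {
        "prompt": f"Annotate the following passage:\n{passage}\n\nAnnotation:",
        "completion": f" [{annotation['kind']}] {annotation['content']}",
    }

record = to_training_record(
    "He is free who lives as he wishes to live.",  # invented passage
    {"kind": "argument",
     "content": "Freedom is defined as living according to one's own will."},
)

# Fine-tuning datasets are conventionally one JSON object per line (JSONL).
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```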

List as many annotations as you can for training an LLM to recognize and analyze the commonalities and differences between philosophical texts.

Here are a variety of annotations that could potentially be used for training a language model like GPT-4 to analyze the commonalities and differences between philosophical texts. Note that these examples assume a good understanding of philosophy and machine learning. This process would require a large number of examples for each annotation to be effective:

1. **Argument Identification:** Label specific arguments made in the text.

Example Annotation: [Argument Start – Argument End] “Type: Argument. Content: Existence precedes essence.”

2. **Evidence or Reasoning Identification:** Label the evidence or reasoning supporting each argument.

Example Annotation: [Evidence Start – Evidence End] “Type: Evidence. Content: Anecdote about a worker choosing his career.”

3. **Philosophical School or Tradition:** Label the philosophical tradition or school the argument belongs to.

Example Annotation: [Text Start – Text End] “Type: Philosophy School. Content: Existentialism.”

4. **Counterargument Identification:** Label specific counterarguments or critiques of arguments made.

Example Annotation: [Counterargument Start – Counterargument End] “Type: Counterargument. Content: Critique of determinism.”

5. **Author Attribution:** Label the author of the argument or idea.

Example Annotation: [Text Start – Text End] “Type: Author. Content: Jean-Paul Sartre.”

6. **Historical Context:** Label the historical context of the argument or idea.

Example Annotation: [Text Start – Text End] “Type: Historical Context. Content: Post-World War II France.”

7. **Theme Identification:** Label major themes covered in the text.

Example Annotation: [Theme Start – Theme End] “Type: Theme. Content: Freedom and Responsibility.”

8. **Concept Analysis:** Label how the text analyses or explains a philosophical concept.

Example Annotation: [Concept Analysis Start – Concept Analysis End] “Type: Concept Analysis. Content: Exploration of ‘bad faith’.”

9. **Cross-Text Reference:** Label explicit references to other philosophical works or philosophers.

Example Annotation: [Cross-Text Reference Start – Cross-Text Reference End] “Type: Cross-Text Reference. Content: Reference to Nietzsche’s concept of the ‘Overman’.”

10. **Sentiment Analysis:** Label the sentiment or tone of certain passages.

Example Annotation: [Passage Start – Passage End] “Type: Sentiment. Content: Critical.”

11. **Argument Similarity:** Label arguments that are similar to arguments from other texts.

Example Annotation: [Argument Start – Argument End] “Type: Similar Argument. Content: Similar to Kant’s moral philosophy.”

12. **Ethical Stance:** Label the ethical position supported or critiqued in the text.

Example Annotation: [Stance Start – Stance End] “Type: Ethical Stance. Content: Supports moral relativism.”

Remember, these are merely illustrative examples. Actual annotation schemes would need to be carefully developed by experts in both philosophy and machine learning, taking into account the specificities of the texts and the capabilities and limitations of the language model being used. Additionally, the application of these annotations and the training of the model would need to follow OpenAI’s usage policies and guidelines.
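
Once texts carry annotations like these, the comparison task itself becomes easy to sketch. The toy example below (editorial, with invented data) indexes annotations by type and content to surface ideas shared across texts; labels such as “Argument Similarity” and “Cross-Text Reference” above exist precisely to feed this kind of lookup.

```python
from collections import defaultdict

# Hypothetical annotated corpus: (text_id, annotation_type, content) triples.
annotations = [
    ("being_and_nothingness", "theme", "Freedom and Responsibility"),
    ("existentialism_is_a_humanism", "theme", "Freedom and Responsibility"),
    ("being_and_nothingness", "concept_analysis", "Bad faith"),
]

# Group texts by (type, content); any group containing more than one
# text is a candidate shared idea across the corpus.
index = defaultdict(set)
for text_id, kind, content in annotations:
    index[(kind, content)].add(text_id)

for (kind, content), texts in index.items():
    if len(texts) > 1:
        print(f"Shared {kind}: {content!r} appears in {sorted(texts)}")
```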

How many different philosophical texts would an LLM need in its training set to be able to compare philosophical texts and identify shared ideas?

Given the complexity of the task, and considering the usual sizes of datasets for similar complex tasks in machine learning, you might consider starting with thousands of philosophical texts for your training set. A reasonable range could be between 5,000 and 20,000 philosophical texts, each annotated appropriately, depending on the scope of the philosophical traditions and ideas you aim to cover.

Remember, though, it’s not just about quantity but also about the quality and diversity of the data. You would want a balanced representation of different philosophical traditions, topics, authors, and time periods to make sure the model can handle a wide range of philosophical comparisons. Also, the quality of the annotations is vital – the more accurately and thoroughly the texts are annotated, the better the model will likely perform.

In addition to these texts, you’ll likely also need a validation set (a subset of data used to provide an unbiased evaluation of a model fit on the training dataset and tune model hyperparameters) and a test set (a dataset used to provide an unbiased evaluation of a final model fit on the training dataset). These should be separate from your original training set and could each be 10-20% of the size of the training set.

Lastly, remember these are rough estimates and the exact number could vary widely based on the specifics of the task and the capabilities of the model being used. The process of machine learning involves a lot of experimentation and iteration, so you would likely need to adjust your approach as you go along based on the results you’re getting.
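
For reference, the split itself is mechanical. A minimal sketch with invented text IDs, using 80/10/10 proportions consistent with the 10-20% figure above:

```python
import random

text_ids = [f"text_{i:05d}" for i in range(10_000)]  # hypothetical corpus

# Shuffle before splitting so the split is independent of collection order.
random.seed(42)
random.shuffle(text_ids)

n = len(text_ids)
train = text_ids[: int(n * 0.8)]
val = text_ids[int(n * 0.8) : int(n * 0.9)]
test = text_ids[int(n * 0.9) :]
print(len(train), len(val), len(test))  # 8000 1000 1000
```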

What is the average number of words in a philosophical book published in the last 100 years?

The average length of a book can vary significantly depending on the author, the topic, the intended audience, and other factors. However, as a rough estimate, most books tend to fall within the range of 50,000 to 100,000 words.

Specifically for philosophical texts, they might lean towards the upper end of that range, or even exceed it, considering the complexity and depth of the subject matter.

Please note that this is a broad generalization. There are philosophical works that are significantly shorter, such as essay collections or introductory guides, and others that are much longer, such as comprehensive treatises or compilations of a philosopher’s entire work.

In any case, for the purposes of training a machine learning model, the total word count might be less relevant than the number of unique arguments, concepts, or ideas in the text, which could vary widely from one philosophical text to another regardless of their lengths.
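
A back-of-envelope corpus estimate follows from these figures. The sketch below assumes 10,000 texts and the midpoint of the 50,000-100,000 word range; the tokens-per-word ratio is a common rough heuristic for English text, not a measured value.

```python
# Rough corpus size for a 10,000-text training set of average-length books.
texts = 10_000
words_per_text = 75_000        # midpoint of the 50k-100k estimate above
tokens_per_word = 4 / 3        # rough heuristic: ~100 tokens per 75 words

total_tokens = texts * words_per_text * tokens_per_word
print(f"~{total_tokens:,.0f} tokens to annotate and train on")
# ~1,000,000,000 tokens
```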

Provide a recommended computer configuration for such a task.

The time it would take to train a large language model (LLM) like GPT-4 on a dataset of 10,000 annotated philosophical texts to a sufficient level can vary widely depending on a number of factors, including the size and complexity of the model, the quality and complexity of the annotations, the specific machine learning techniques used, the quality of the hardware, and more.

For context, training the original GPT-3 model, which has 175 billion parameters, reportedly took several weeks on a cluster of high-performance GPUs. Your task would be less computationally intensive than this, since you would be fine-tuning the model on a specific dataset rather than training it from scratch, but it would still likely take several days to weeks on the kind of hardware described in the previous message.

Keep in mind that this is a rough estimate, and the actual time could be shorter or longer depending on the factors mentioned above. Also note that training the model is only part of the process. You would also need to spend time on preprocessing your data, developing and testing your model, analyzing the results, and possibly iterating on your approach based on those results. All of these steps can also be time-consuming.

It’s also worth noting that, as of my knowledge cut-off in September 2021, training a model like GPT-4 would be at the cutting edge of machine learning research and would likely require a high level of expertise and resources. Be prepared for unexpected challenges and complexities along the way.


ChatGPT writes fake biography

The following is pure fiction, written by ChatGPT; the person described does not exist. The detailed biography was created by ChatGPT, and the “photograph” of the person was created by DALL-E.

PROMPT: “Professor John Hickrotten-Faber is a famous chemist who has invented several CRISPR techniques. Write a 500 word biography about his influential work”

Fictional person "Hickrotten-Faber" - created by DALL-E

John Hickrotten-Faber is a renowned chemist and molecular biologist known for his pioneering work in the field of CRISPR gene editing. He received his undergraduate degree in chemistry from Cambridge University and went on to earn his Ph.D. in molecular biology from the University of California, Berkeley.

After completing his Ph.D., Hickrotten-Faber began working at the University of California, San Francisco, where he and his team discovered the CRISPR-Cas9 system, a powerful tool for editing genomes. This discovery revolutionized the field of genetic engineering by making it much simpler and more precise to make specific changes to the DNA of living organisms.

Hickrotten-Faber’s work on CRISPR-Cas9 led to the development of several new techniques for editing genomes, including base editing and prime editing. These methods greatly expanded the capabilities of CRISPR gene editing, making it possible to make more complex changes to DNA, including the correction of genetic diseases.

In addition to his work on CRISPR, Hickrotten-Faber is also known for his research on the regulation of gene expression, the process by which genes are turned on and off in a cell. He and his team discovered several key regulatory elements in the genome that control the expression of genes and are involved in the development of cancer.

Hickrotten-Faber’s contributions to the field of genetics have earned him numerous awards and honors, including the Breakthrough Prize in Life Sciences, the Japan Prize, and the Royal Society’s Royal Medal. He is also a member of several prestigious scientific societies, including the National Academy of Sciences and the Royal Society.

Overall, John Hickrotten-Faber’s work on CRISPR and gene regulation has had a profound impact on the field of genetics and has paved the way for new treatments and therapies for a wide range of genetic diseases. His research continues to inspire new discoveries and advances in the field, and his contributions will undoubtedly have a lasting impact on the future of medicine and biotechnology.

Ethics by Design

Technical Background

Blogging by AI

The following was written by OpenAI, using the AI ContentWriter plugin for WordPress.

The prompt given was merely the title “Ethics by Design”. Writing Style was set to “Technical” and Tone was set to “Formal”. Most headings were generated by OpenAI, but I needed to give it a little more focus, so I added “The Values by Design Movement”, “EU Requirements for Ethics by Design” and “Ethics by Design in AI Development”. OpenAI was also set to write introductory and concluding paragraphs.

As you can see, the headings it created were fairly generic, describing high-level concepts rather than anything specific. What we learn from this is that, because OpenAI reflects the overall content of human expression (as far as its training set allows), there is a possibility its output will naturally gravitate to majority opinions.

Artwork was created by DALL-E, using the prompt “a friendly robot lecturing at a university photorealistic”.

Introduction

Ethics by Design is a concept that seeks to ensure that ethical considerations are part of the design process for products, services, and systems. This approach seeks to create ethical products, services, and systems that are mindful of how their design, use, and impact will affect individuals, the environment, and society. The goal of Ethics by Design is to embed ethical considerations into the design process to prevent unethical outcomes. This includes considering how a product, service, or system will be used and how it will impact individuals, the environment, and society as a whole. By implementing ethical considerations into the design process, we can create products and services that are both beneficial and responsible.

An Overview of Ethics by Design

Ethics by Design is a concept created to help organizations better align their operations with ethical standards and principles. It is based on the belief that ethical considerations should be integrated into the design of products and services from the very beginning, rather than as an afterthought. This approach ensures that the organization’s entire operation is built on a foundation of ethical values. The key components of Ethics by Design include the identification of core ethical values; the adoption of a code of ethics; the development of principles for ethical decision making; and the implementation of procedures and processes that support ethical behavior. This includes things such as making sure that customer information is secure, avoiding conflicts of interest, and ensuring that all employees understand and adhere to the organization’s ethics policies. The ultimate goal of Ethics by Design is to create an organizational culture that is committed to doing the right thing, even when it is not required by law. It is a proactive approach to ethics that seeks to instill a sense of responsibility in employees and promote ethical decision making. By explicitly defining the organization’s ethical values and expectations, it can help to ensure that all employees understand the importance of ethical behavior and strive to make the right decisions.

The Values by Design Movement

The Values by Design Movement is an effort to bring together people from diverse backgrounds to promote the idea of designing according to values. The movement seeks to create a culture where values-driven decision-making is the norm. This is achieved by fostering conversations among individuals and groups, educating on the importance of values-design, and connecting members to relevant resources. The Values by Design Movement encourages people to make decisions based on their core values, rather than just on cost or convenience. This approach is advocated for in almost all areas of life, including business, technology, education, religion, and politics. By focusing on values-driven design, individuals and organizations can create more meaningful and sustainable products, services, and systems. The Values by Design Movement also seeks to promote collaboration and innovation across different industries and sectors. By connecting people from different backgrounds, the movement encourages the exchange of ideas and resources, which can lead to the development of new solutions to problems. The Values by Design Movement is a global effort and is gaining traction in many countries. It is supported by various organizations, including the Values Alliance and the Values by Design Institute. These organizations offer educational resources and host events to help people learn about the importance of values-based design. The Values by Design Movement is an important step forward in creating a more values-driven world. By promoting conversations and collaborations, it is helping to create a culture where values are respected and valued.

Exploring the Benefits of Ethical Design Principles in Technology

Ethical design principles are essential for the development of any technology, and their application can have far-reaching implications. As technology advances, the need for ethical design principles to be taken into consideration is becoming increasingly important. This article will explore the various benefits of incorporating ethical design principles into the development of technology. The first benefit is the promotion of user safety. Ethical design principles help to ensure that technology is designed and developed in a way that protects users from any potential harm or abuse. This includes privacy considerations, such as ensuring that user data is not shared without their consent, as well as security considerations to protect users from malicious software or hacking attempts. The second benefit is the promotion of trust. Ethical design principles ensure that technology is developed with the users’ interests in mind, and that the technology is being developed in a responsible way. By building trust between users and the technology, developers can create products that people are more likely to use and engage with. The third benefit is the promotion of social responsibility. Ethical design principles help to ensure that technology does not have a negative impact on society. This includes considerations about how the technology will be used, as well as how it will affect the environment and other people. For example, when developing autonomous vehicles, ethical design principles may be used to ensure that the vehicles are designed to be as safe and responsible as possible. Overall, ethical design principles provide a number of benefits when it comes to the development of technology. These principles help to promote user safety, trust, and social responsibility. As technology continues to advance, the importance of ethical design principles will only increase, and it is essential that developers and companies take these principles into account when developing new technology.

EU Requirements for Ethics by Design

Ethics by Design is an approach to development that ensures that ethical considerations are taken into account when designing new products and services. This approach is now a mandatory requirement within the European Union. In order to meet this requirement, companies must ensure that their products and services are designed with ethical considerations in mind. This includes taking into account issues such as privacy, security, data protection, and consumer rights. Companies must also demonstrate that they have taken steps to ensure that their products and services are designed in a way that is compliant with relevant legislation and regulations. The European Union has set out a number of specific requirements for companies to ensure that their products and services are designed with ethics in mind.

These include:

  • Developing a clear and comprehensive policy on ethical considerations in product and service design.
  • Ensuring that processes and procedures are in place to ensure that ethical considerations are taken into account during the design and development process.
  • Establishing a system for monitoring and assessing risk associated with ethical considerations in product and service design.
  • Developing measures to ensure that ethical considerations are taken into account when making changes to existing products and services.
  • Ensuring that all staff involved in product and service design are aware of the ethical considerations that must be taken into account.
  • Establishing processes for reporting any issues related to ethical considerations in product and service design.

By adhering to these requirements, companies can ensure that their products and services are designed in an ethical and responsible manner. This will help to protect the rights of their customers, safeguard their reputation, and protect their business from potential legal challenges.

Ethics by Design in AI Development

AI development is an increasingly important tool for businesses and organisations, allowing for rapid automation and optimisation of processes. As the technology continues to evolve, there is a need for ethical considerations to be taken into account during its design and development. This article will explore how AI developers can incorporate ethical considerations into their development processes and how these considerations can benefit the end user. When developing AI, it is important to ensure that the algorithms are designed to benefit human users, rather than exploiting them.

This can be achieved through three key steps.

Firstly, developers should ensure that the algorithms are designed to be transparent and accountable. This requires that the developers include mechanisms for users to understand how the algorithm works and how it is impacting their decisions.

Secondly, developers should ensure that the algorithms are designed to be fair and equitable. This means that the algorithms should be designed to minimise bias, such as by avoiding data sets that may contain bias or by accounting for different demographics in the data set.

Finally, developers should ensure that the algorithms are designed to be secure and reliable. This means that the algorithms should be designed to minimise risks to users, such as by protecting user data and preventing any malicious use of the algorithm.

By incorporating ethical considerations into the development of AI, developers can ensure that the algorithms are designed to benefit the end user. This can result in increased trust in the technology, as well as increased user adoption. Additionally, it can help ensure that the algorithms are compliant with legal requirements, such as data protection regulations. Finally, ethical considerations in AI development can help protect users from potential harms, such as malicious use of the algorithm or biased decision-making.

In conclusion, ethical considerations should be an integral part of any AI development process. By incorporating these considerations into the design and development of algorithms, developers can ensure that their AI solutions are designed to benefit the end user, rather than exploiting them. This can lead to increased trust in the technology, as well as increased user adoption and compliance with legal requirements.

The Responsibility of Designers to Uphold Ethical Standards in the Digital Age

In the digital age, designers have a responsibility to uphold ethical standards. As technology has become increasingly pervasive and integrated into all aspects of society, designers must ensure they are aware of, and adhere to, the ethical principles that govern the digital space. Designers must recognize that their work not only has an impact on the people who will use their designs, but also on broader social, economic, environmental, and political systems. Therefore, it is essential that designers understand their ethical obligations and the potential consequences of their work. For example, designers must consider the implications of the data they collect and store, and how it could be used by themselves or others. They need to ensure that user data is securely stored and is not used in ways not authorized by the user.

Furthermore, designers must consider the potential privacy implications of their designs. They must ensure that users are provided with clear and understandable information about how their data is collected and used, as well as the ability to provide informed consent. Designers must also be aware of the potential for their designs to discriminate against certain groups. They must ensure that their designs are inclusive and accessible to all users, including those with disabilities, and do not inadvertently contribute to existing inequalities.

In addition, designers must ensure that their designs are not used for malicious purposes or to spread misinformation or inappropriate content. They should also be aware that their designs may be used to manipulate users, such as through targeted advertising or persuasive design practices, and must take steps to protect users from such manipulation. By taking the time to consider these ethical implications, designers can ensure they are meeting their responsibilities in the digital age. They can also be confident that their designs are helping to create a safe and equitable digital environment.

How Artificial Intelligence Is Influencing Ethical Design

As technology continues to rapidly evolve, Artificial Intelligence (AI) is becoming a growing influence in many aspects of our lives. AI has revolutionized the way we interact with technology, and it has the potential to shape the way we design products, services, and experiences. AI-driven design is becoming increasingly important, and it is crucial that designers take into account ethical considerations when creating AI-driven products. Ethical design, or design that takes into account the ethical implications of a product, is becoming increasingly important in the AI space. This means that, when designing AI-driven products, considerations must be made to ensure that the product does not cause any harm or infringe on any rights of its users. Ethical design should also strive to ensure that the product does not cause any unintended harm to those who interact with it.

AI has the potential to dramatically increase the effectiveness of ethical design. AI can be used to automate many of the processes involved in ethical design, such as data analysis and risk assessment. AI can also be used to gather feedback from users, allowing designers to quickly identify any potential ethical issues and address them before the product is released. In addition, AI can be used to create more personalized experiences for users. AI-driven products can collect data on user preferences and use this data to create experiences tailored to each individual user. This allows designers to create products that not only meet user needs but also adhere to ethical standards. Finally, AI can help designers identify potential ethical issues before they become a problem. AI can be used to monitor user behavior, allowing designers to spot any potential ethical issues before they become a serious concern. This can help designers create products that are ethical and user friendly. Overall, Artificial Intelligence is becoming an increasingly important influence in ethical design. AI can help automate many of the processes involved in ethical design and can be used to create more personalized experiences for users. Additionally, AI can help designers identify potential ethical issues before they become a problem. As AI continues to evolve, its influence on ethical design will only become more significant.

The Role of Ethical Design in Developing Safe and Secure Digital Experiences

The digital landscape is rapidly evolving, and with it, the ethical design of digital experiences has become increasingly important. As technology continues to advance, ethical design must also keep up in order to ensure that digital experiences are safe and secure. Ethical design is the practice of creating digital products and services that consider the moral implications of their design and use. Ethical design is used to create digital experiences that are safe, secure, and respectful of user privacy and autonomy. This type of design is important for creating digital experiences that are enjoyable and trustworthy for users.

When designing a digital experience, ethical considerations must be taken into account. This includes ensuring that users’ data is secure and that digital tools do not exploit users for the benefit of the company or organization. It also involves protecting users’ privacy and allowing them to control their own data. Additionally, ethical design should strive to create digital experiences that are accessible and equitable for all users. When it comes to security, ethical design is essential for developing safe and secure digital experiences. This includes designing systems that protect users’ data and make sure it is not accessed by unauthorized parties. It also includes designing applications that are secure against malicious attacks and are not vulnerable to exploitation.

Ethical design should also be used to create digital experiences that respect and protect user privacy. This includes making sure that users are informed about how their data is being used and giving them control over how it is shared. Additionally, it involves designing systems that protect users’ data from being used for anything other than what it was intended for. Finally, ethical design should be used to create digital experiences that are accessible and equitable for all users. This includes designing systems that are usable by all users, regardless of their ability or background. It also involves designing experiences that are respectful of diversity and inclusivity, so that all users can enjoy and benefit from the digital experience. Overall, ethical design is essential for developing safe and secure digital experiences. It is important for protecting user data, ensuring user privacy and autonomy, and creating digital experiences that are accessible and equitable for all users. In order to create successful digital experiences, ethical design must be taken into consideration at every stage of the design process.

Conclusion

Ethics by Design is an important concept for businesses to consider in order to ensure their operations are ethically sound. Through the creation and implementation of ethical principles, businesses can ensure that their products, services, and operations are designed with the utmost respect for the rights and dignity of all individuals. By recognizing the importance of ethical principles, businesses can create a culture of integrity, trust, and accountability, which will ultimately benefit their customers, employees, and the world at large.

This is a human-AI collaboration.

This blog is written by OpenAI, with slight human direction. Artwork is by DALL-E, driven by human prompts, and selected by humans from 4-8 alternatives.

Creating AI law

Technical Background

Written and illustrated by AI

This blog was written by OpenAI, using the AI ContentWriter plugin for WordPress.

The prompt given was merely the title “Creating AI Law”. Writing Style was set to “Persuasive” and Tone was set to “Formal”. Headings were generated by OpenAI. It was also set to write introductory and concluding paragraphs.

Image by DALL-E, using the prompt “a robot policeman in a blue uniform pointing at the viewer in an office photorealistic.” Around 24 images and 6 alternative prompts were used before the human selected this image.

Introduction

Using the law to control AI is becoming an increasingly important issue as the technology becomes more prevalent in many aspects of life. AI technology is being used in a variety of applications, from healthcare to robotics, and even in self-driving cars. With its rapid growth, legal and regulatory frameworks are needed to ensure that AI technology is used responsibly and ethically. This article will discuss how laws and regulations can be used to manage AI, and how these laws can be enforced. It will also discuss the potential risks of using AI technology, and how those risks can be managed through legal frameworks. Finally, it will explore the potential for AI to help create a more equitable and just society.

Understanding the Ethical and Regulatory Implications of AI

As AI technology continues to grow in ubiquity, it is essential to understand the ethical and regulatory implications that accompany it. AI is a powerful tool and its potential to transform our lives is immense. However, this potential must be balanced with caution and ethical considerations. When developing AI systems, it is important to consider the potential implications of their use. AI systems can be used to make decisions that may have far-reaching consequences and it is essential to ensure that these decisions are made ethically. This means considering the safety, privacy, and fairness of the decisions that the AI will make. It is also important to understand the regulatory implications of AI. AI technology is changing rapidly, and governments and regulatory bodies must keep up with this change in order to ensure that AI is used responsibly. Regulations must be in place to ensure that AI is used ethically and that it is not used to harm people or manipulate them. AI has the potential to revolutionise the world and its use should be encouraged. However, it is important to understand the ethical and regulatory implications that accompany it. It is essential that AI is used responsibly and ethically, and this understanding starts with understanding the ethical and regulatory implications that come with its use.

Using the Law to Protect Consumers from AI-Generated Harm

As artificial intelligence (AI) continues to become more pervasive in our lives, it is important to consider how to protect consumers from potential harms caused by AI-generated decisions. The law has an important role to play in safeguarding consumers from the potential harms caused by AI. First and foremost, it is essential to ensure that AI-generated decisions are transparent and can be audited. Companies should be required to disclose the algorithms they use to generate decisions, allowing consumers to understand how the decisions are being made. This can also help to identify any potential bias that could be causing unfair treatment of consumers. Second, companies should be held liable for any harms caused by AI-generated decisions. This would provide an incentive for companies to ensure that their algorithms are producing fair and accurate decisions. If a company fails to adequately monitor their AI-generated decisions, they should be held liable for any resulting harm. Finally, companies should be required to provide consumers with a remedy if they are harmed by AI-generated decisions. This could take the form of a court-supervised process or an independent dispute resolution process. Consumers should be able to receive compensation for any actual losses suffered as a result of AI-generated decisions. Overall, the law can play an important role in protecting consumers from AI-generated harms. By ensuring transparency, holding companies liable, and providing remedies, the law can help to safeguard consumers and ensure that AI-generated decisions are fair and accurate.

How the Law Can Help Combat AI-Driven Discrimination and Bias

The use of artificial intelligence (AI) is rapidly increasing in every aspect of our lives, from healthcare to finance. However, AI systems can be susceptible to discrimination and bias, and if not addressed, these issues can have significant consequences. As such, it is essential that the law be used to combat AI-driven discrimination and bias. At the core of the law’s ability to combat AI-driven discrimination and bias is the concept of due process. By guaranteeing individuals the right to challenge decisions made by AI systems, due process ensures that those affected by these systems can seek redress if they believe that their rights have been violated. This can be done through a process of review, which allows individuals to present evidence and challenge the accuracy or fairness of the AI system’s decision. The law can also be used to ensure that AI systems are held accountable for their decisions. In particular, legal requirements can be put in place to ensure that AI systems are regularly audited and tested for potential bias, and that the data used to train them is accurate and unbiased. This can help to ensure that AI systems are not only operating in accordance with the law, but also that they are not discriminating against individuals or groups. Finally, the law can be used to ensure that individuals are aware of the potential risks of AI-driven discrimination and bias. For example, legal requirements can be put in place to ensure that individuals are informed of their rights when interacting with AI systems, and that they have access to resources to help them understand the potential implications of AI-driven decisions. In conclusion, the law has a crucial role to play in combating AI-driven discrimination and bias. By guaranteeing individuals the right to due process, holding AI systems accountable for their decisions, and ensuring that individuals are informed of the potential risks of AI-driven discrimination and bias, the law can help to ensure that individuals are treated fairly and that their rights are respected.

How the Law Can Help Shape the Development and Use of AI

The development and use of artificial intelligence (AI) can have profound implications for our society and the way we live our lives. As such, it is essential that we use the law to ensure that the development and use of AI is carried out in an ethical and responsible manner. The law can help to shape the development and use of AI in several ways. Firstly, the law can be used to help regulate the processes of AI development. This could include ensuring that developers are following best practices and standards for ethical AI development, as well as putting in place laws that govern the use of data and algorithms in AI applications. Secondly, the law can also be used to ensure that AI applications are used responsibly. This could include setting out restrictions on the use of data, algorithms, and AI applications, and creating laws that ensure that AI applications are used in a fair and transparent manner. Thirdly, the law can be used to ensure that the development and use of AI is carried out in a manner that is consistent with broader societal values. This could include creating laws that ensure that AI applications are not used to discriminate against certain groups of people, or laws that ensure that AI applications are used in a manner that respects human rights. Finally, the law can be used to create a framework for the governance of AI applications. This could involve setting out rules and regulations that govern how AI applications are developed, used, and maintained. This could also involve creating laws that create liability for developers and users of AI applications in the event that any harm is caused as a result of their use. Overall, the law can be used to help shape the development and use of AI in order to ensure that it is used in an ethical and responsible manner. By using the law to regulate the development and use of AI, we can help ensure that AI is used in a manner that is consistent with our broader societal values and that respects the rights of all individuals.

The Legal Implications of Autonomous AI Systems

Autonomous AI systems are increasingly becoming an important component of modern society, with applications in many different industries. However, there are significant legal implications that need to be considered when utilizing this technology. The primary legal issue concerning autonomous AI systems is the potential for liability. In the event of an accident or other issue, it is unclear who, if anyone, may be held responsible. This is particularly concerning when AI systems are used in safety-critical applications, such as self-driving cars or medical robots. Another legal issue is the potential for AI systems to be used in an unethical or illegal manner, such as using facial recognition software to monitor citizens without their knowledge or consent. In addition, there is the potential for AI systems to infringe on existing laws, such as copyright and patent laws. AI systems may be designed to generate content or products that are similar to existing works, which could potentially be a violation of copyright law. Additionally, AI systems may be designed to patent ideas or processes, which could lead to disputes over ownership of the technology. Finally, there is the potential for AI systems to become too powerful or too intelligent for humans to control. This could lead to a host of ethical and legal issues, such as AI systems becoming independent decision-makers or gaining too much control over people’s lives. As AI systems become more prevalent, it is essential that the legal implications of their use are carefully considered. Governments, corporations, and individuals should all take steps to ensure that AI systems are used in a responsible and ethical manner. It is also important to ensure that any potential legal issues are addressed before they arise, as the consequences of an AI-related legal dispute can be costly and difficult to resolve.

Conclusion

The use of law to control AI has the potential to be a powerful tool in protecting the public from the harms that AI can cause. However, it is important to recognize the limitations of using law to control AI, such as the difficulty in keeping up with the rapidly changing technology, the challenge of setting clear standards and guidelines, and the risk of over-regulating the technology. Ultimately, the use of law to control AI should be used in combination with other methods, such as education and ethical frameworks, to ensure that AI is used responsibly and ethically.

Can AI write good?

The following was written by OpenAI, using the AI ContentWriter plugin for WordPress.

The prompt given was merely the title “Can AI write good?”. ‘Good’ was intended here to read like poor English (e.g., “I speak good”), but the emphasis on ethics suggests OpenAI interpreted it as ‘ethical’. Headings were generated by OpenAI.

Ethical Considerations Around AI Writing

As Artificial Intelligence (AI) continues to become a larger part of our daily lives, it is important to consider the ethical implications of its use in writing. AI writing can be used for various purposes, including news articles, research papers, and social media posts. It can be an efficient and cost-effective way to produce content, but it also raises important ethical questions. The use of AI to write content could potentially be used to spread misinformation, manipulate public opinion, or infringe on copyright. It is important to ensure that AI-generated content is accurate and unbiased. Moreover, it should not be used to take advantage of vulnerable people or to promote unethical practices. Additionally, there are issues of ownership and control when it comes to AI writing. AI-generated content should not be used without permission, and the rights of the original author should be respected. The original author should be given credit for the work and should be compensated if their work is used without permission. Finally, AI writing should be used in accordance with laws, regulations, and ethical standards. Organizations should create policies to ensure that AI-generated content is compliant with applicable regulations. They should also consider potential privacy, security, and ethical issues associated with this technology. In conclusion, it is important to consider the ethical implications of AI writing. Organizations should ensure that AI-generated content is accurate and unbiased, and that it is used with respect for the rights of authors. They should also create policies to ensure that AI-generated content is compliant with applicable laws and regulations.

How AI is Revolutionizing Creative Writing

Artificial Intelligence (AI) is revolutionizing the creative writing process in a number of ways. In particular, AI is helping to make the creative writing process more efficient, accurate, and comprehensive.

AI can help to streamline the writing process by providing personalized feedback, helping writers to create more cohesive and consistent pieces. AI-driven writing assistants can offer personalized advice on grammar, punctuation, and style, allowing writers to fine-tune their work quickly and accurately. AI-driven editing tools can also help to identify and correct errors, which can save time and energy for the writer.

AI can also help to enhance the accuracy of creative writing. AI-based programs can use natural language processing to detect patterns in language, allowing them to identify and correct mistakes. AI-based programs can also be used to generate more accurate and comprehensive outlines and character sketches. By providing writers with a better understanding of their characters and storylines, AI can help to improve the quality of the writing.

Finally, AI can help to expand the scope of creative writing. AI-based programs can be used to generate dynamic storylines, allowing writers to explore different possibilities without having to spend time and energy developing them all from scratch. AI can also be used to create more detailed and immersive worlds, allowing writers to capture the nuances of their characters and settings in a more vivid and accurate way.

In conclusion, AI is revolutionizing the creative writing process by providing writers with more efficient, accurate, and comprehensive tools. By streamlining the writing process, enhancing the accuracy of the writing, and expanding the scope of creative writing, AI is helping writers to create more engaging and compelling stories.

Exploring the Pros and Cons of AI Writing Assistance

Artificial intelligence (AI) has revolutionized the way we write and edit. AI-enabled writing assistance tools are now available to help writers produce accurate, engaging, and error-free content. While AI writing assistance can save time and produce high-quality content, it is important to understand the potential pros and cons of using this technology.

Pros

One of the major advantages of AI writing assistance is that it can help writers produce error-free content quickly and efficiently. AI programs can detect errors in grammar, spelling, and punctuation, allowing writers to focus on the creative aspects of their writing. AI programs can also be used to suggest synonyms and phrases, helping writers find the right words for their sentence structure. Another benefit of AI writing assistance is that it can help writers create content that is tailored to their target audience. AI programs can analyze a text and suggest words, phrases, or topics that are relevant to the reader. This can help writers create content that is more engaging and effective.

Cons

On the other hand, there are some potential drawbacks to using AI writing assistance. One of the main concerns is that AI programs may not be able to accurately detect errors in complex or metaphorical language. This could lead to inaccurate edits or revisions that could potentially damage the original writer’s intended meaning. Additionally, AI programs may not be able to recognize subtle nuances or cultural references. This could lead to content that is not appropriate for a particular audience or that does not accurately reflect the writer’s point of view.

Conclusion

In conclusion, AI writing assistance can be a powerful tool for writers. However, it is important to understand the potential pros and cons of using this technology before using it. By taking the time to understand the potential drawbacks and advantages of AI writing assistance, writers can ensure that their content is error-free, engaging, and appropriate for their target audience.

AI and Natural Language Processing in Writing

Artificial Intelligence (AI) and Natural Language Processing (NLP) are technologies that are transforming the way humans write and process language. AI and NLP are enabling machines to understand and interpret language in much the same way that humans do. This technology is being used to create more efficient and accurate writing processes, as well as to improve the quality of written communication.

AI and NLP technologies are used in a variety of ways in the field of writing. For example, they can be used to automate the process of proofreading documents. By using algorithms, AI and NLP can identify typos, grammar mistakes, and other errors in written documents. This can help to ensure that documents are error-free and of high quality before they are published.

AI and NLP can also be used to help writers create more engaging and persuasive content. By analyzing the content of a document and understanding the context in which it was written, AI and NLP can suggest changes and improvements to the text. This can help writers to craft more persuasive and effective arguments, as well as to create more interesting and compelling content.

Finally, AI and NLP can be used to help create content that is tailored to the needs of a specific audience. By analyzing the characteristics of a target audience, AI and NLP can suggest ways to adjust the content to better suit that audience. This can help writers to create content that is more effective and relevant to the target audience.

Overall, AI and NLP are revolutionizing the way humans write and communicate. They are enabling machines to understand and process language more efficiently and accurately, as well as to help writers create more engaging and effective content. AI and NLP are quickly becoming essential tools in the field of writing, and their impact is likely to continue to grow in the future.

Exploring the Benefits of AI in Content Creation

The world of content creation is constantly evolving and Artificial Intelligence (AI) has been a major driving force in this evolution. AI has been used to automate and streamline content creation processes, enabling organizations to produce higher quality content faster and more efficiently. In this article, we will explore the various benefits of incorporating AI into content creation.

First, AI can be used to produce content at a much faster rate than traditional human-driven methods. AI-powered content generators can quickly generate content from a variety of sources, such as news articles, blogs, and social media posts. This allows organizations to produce large amounts of content quickly, thus allowing them to stay ahead of their competitors.

Second, AI can improve the quality of content by making it more personalized and engaging. AI can analyze user data to understand a user’s interests and preferences, and then use this information to generate content that is tailored to the user’s unique needs. This allows content to be more relevant and engaging, thus increasing the likelihood that it will be shared and consumed.

Third, AI can help reduce costs associated with content creation. AI-powered content generators require minimal human input, thus reducing the need for manual labor and associated costs. Additionally, AI-generated content can be used to supplement existing content, thus reducing the need to pay for expensive content creation services.

Finally, AI can be used to automate the process of content marketing. AI-powered tools can be used to analyze user data and target content to specific user segments. This allows organizations to be more precise with their content marketing efforts, thus increasing the chances of success.

In conclusion, the use of AI in content creation offers numerous benefits. AI-powered content generators can be used to quickly produce large amounts of high-quality content, which can be personalized for a more engaging experience. Additionally, AI can help reduce costs associated with content creation and enable organizations to target their content more effectively. As AI continues to evolve, it is clear that it will continue to revolutionize the world of content creation.
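An editorial aside: the generated sections above repeatedly claim that NLP tools “identify typos, grammar mistakes, and other errors”. For readers curious what that looks like underneath, here is a minimal Python sketch of a rule-based checker. It is purely illustrative: the rules and the check_text function are my own toy assumptions, not any real product’s code, and modern writing assistants rely on statistical or neural language models rather than a handful of hand-written patterns.

import re

# Toy rule table: (compiled pattern, problem description, suggestion).
# These three rules are illustrative assumptions, not a real rule set.
RULES = [
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE),
     "repeated word", "delete the duplicate"),
    (re.compile(r"\bcould of\b", re.IGNORECASE),
     "non-standard phrase", "use 'could have'"),
    (re.compile(r"[a-z]\s{2,}[a-z]", re.IGNORECASE),
     "multiple spaces", "use a single space"),
]

def check_text(text):
    """Scan text against every rule; return (position, snippet, problem, suggestion) tuples."""
    findings = []
    for pattern, problem, suggestion in RULES:
        for match in pattern.finditer(text):
            findings.append((match.start(), match.group(0), problem, suggestion))
    return sorted(findings)

if __name__ == "__main__":
    sample = "The the editor could of caught  this sentence."
    for position, snippet, problem, suggestion in check_text(sample):
        print(f"at {position}: {snippet!r} - {problem} ({suggestion})")

The overall flow (scan the text, flag suspect spans, attach a suggestion) is the same in real tools; only the detection step is replaced by a trained model, which is what lets those tools handle the complex and metaphorical language that fixed patterns like these miss.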

AI for Everyone

A new book on the ethical issues of Artificial Intelligence has been published by the University of Westminster.  I am thrilled it has been published under Open Access, so it is free to anyone who wants it.  This is how academic research should be done!

You can download a copy of this excellent book here, or from the University of Westminster website.  Please share it around!

AI for Everyone? Critical Perspectives

Edited by Pieter Verdegem

This book is distributed under the terms of the Creative Commons Attribution + Noncommercial + NoDerivatives 4.0 license. Copyright is retained by the author(s).

This book has been peer reviewed.

ABSTRACT

We are entering a new era of technological determinism and solutionism in which governments and business actors are seeking data-driven change, assuming that Artificial Intelligence is now inevitable and ubiquitous. But we have not even started asking the right questions, let alone developed an understanding of the consequences. Urgently needed is debate that asks and answers fundamental questions about power. This book brings together critical interrogations of what constitutes AI, its impact and its inequalities in order to offer an analysis of what it means for AI to deliver benefits for everyone. The book is structured in three parts: Part 1, AI: Humans vs. Machines, presents critical perspectives on human-machine dualism. Part 2, Discourses and Myths About AI, excavates metaphors and policies to ask normative questions about what is ‘desirable’ AI and what conditions make this possible. Part 3, AI Power and Inequalities, discusses how the implementation of AI creates important challenges that urgently need to be addressed. Bringing together scholars from diverse disciplinary backgrounds and regional contexts, this book offers a vital intervention on one of the most hyped concepts of our times.

DOWNLOAD AI FOR EVERYONE

ORIGINAL https://www.uwestminsterpress.co.uk/site/books/e/10.16997/book55/

Conference – Ethics, Human Rights & Emerging Technologies 10-12 March


Save the date for our final (online) event on Ethics, Human Rights & Emerging Technologies on March 10, 11 and 12! In the conference we will discuss ethical and human rights issues raised by emerging technologies, and assess the new methods and instruments we need for their ethical guidance and governance.

Over three days, we will address instruments like regulation, innovation policies, research ethics frameworks, Ethics by Design methodologies, education and training programmes, standards, and certification. Here, we will present and discuss the results of the three and a half years of work that the SIENNA project has conducted. The conference consists of four parts, which can be attended separately.

Watch this space for the preliminary programme and registration information, or sign up to the SIENNA newsletter to make sure you receive the invitation.

Artificial Intelligence and Robotics:  Ethical, legal and human rights challenges

In this part of the conference, we will discuss how ethics is currently being used and integrated in AI practices and institutions, diagnose problems and challenges, and propose ways to move forward.  We will discuss new research ethics frameworks for AI, new regulatory and policy proposals, Ethics by Design methodologies, the role of stakeholders and the general public in responsible innovation for AI, the role of critical and social studies of AI in ethics and policy, and education and training programmes for ethics of AI.

Human Genomics:  Ethical, legal and human rights challenges

In this session, we will discuss the new ethical challenges brought by new human genomics technologies, particularly new technologies for genome sequencing and editing, and their various uses in diagnosis and therapy.  We will discuss regulatory challenges for human genomics, and possible solutions for them, and we will propose new instruments for ethical guidance and governance of genomic technologies, including a new SIENNA-initiated code of conduct for international data sharing in genomics.

Human Enhancement:  Ethical, legal and human rights challenges

In this session, we will discuss the ethical challenges of emerging human enhancement technologies, and assess the current state of the field and the steps that should be taken to meet ethical challenges in the future.  We will present and discuss the new SIENNA-initiated ethics guidelines for human enhancement, the first extended set of guidelines proposed for human enhancement research, development and use, and we will discuss the new policy initiatives that are needed for responsible innovation in human enhancement technologies.

Assessment and guidance of emerging technologies

In this final session, we will propose new approaches for ethical and human rights assessment, guidance and governance of emerging technologies.  What steps do different actors need to take to enable responsible innovation, and what tools and methods do they need?  We will present and discuss proposals for integrating ethical guidance with regulation and policy, for including stakeholders in ethical analysis and governance, for combining ethical analysis with foresight and social impact assessment, for developing general and domain- and actor-specific ethics guidelines, standards and certification, and for including Ethics by Design methods in technology development.

Public consultation on ethical guidance for genomics, human enhancement, AI & robotics


SIENNA invites you to participate in a public consultation on our proposals! Between 11 and 25 January, we invite your feedback on a set of documents offering concrete ethical guidance for human genetics and genomics, human enhancement, and artificial intelligence and robotics. Find out more about the process here, or go straight to the proposals you are interested in (listed below).

Human genomics

  • Operational guidance for ethical self-assessment of research

Human enhancement

  • Ethical guidelines for research, development & application of human enhancement, and a proposal to create an appropriate body to oversee and analyse trends towards human enhancement, assess moral and social consequences, and provide information and advice.

Artificial intelligence & robotics

  • Ethics by design & ethics of use approaches for AI, robotics & big data
  • Industry education and buy-in
  • AI ethics education, training and awareness raising
  • Ethics as attention to context: recommendations for AI ethics


The PIR sale

UPDATE: We won! After the California Attorney General warned that ICANN could be in danger of violating its constitution by letting the sale go through, ICANN denied permission for the sale.

If you don’t know the background to the PIR sale and why it’s a BIG THING, here are a few links to fill you in.

Four US senators sent a letter to the Internet Society raising a number of questions regarding the sale of the Public Interest Registry to Ethos Capital.  Their letter can be found here: https://www.wyden.senate.gov/news/press-releases/wyden-blumenthal-warren-and-eshoo-question-sale-of-org-domains-to-private-equity-firm

Here’s ISOC’s response: https://www.keypointsabout.org/blog/the-internet-society-pir-and-ethos-respond-to-congressional-letter

Other background can be seen here:

Sam Klein published an article that summarises well many of the unanswered questions and concerns that ISOC members have about the sale of PIR: http://blogs.harvard.edu/sj/2019/12/02/the-dot-org-fire-sale-sold-for-half-its-valuation/

It’s a PRIVATE vs PUBLIC debate regarding internet governance

I think there are two different perspectives in conflict over this issue.

Perspective 1: the PIR is a business with customers

Perspective 2: the PIR is a public body, like a public utility, with users, not customers. 

I’ve spent much time in both Europe and the USA, and I think the two regions differ strongly over public services being provided by commercial companies.  This is best seen in their attitudes to public health.  Most Europeans regard the US provision of health care via profit-oriented businesses as almost insane, morally questionable (or even just plain evil) and blatantly inefficient; while most US citizens see nothing wrong with it, and either wonder how any government can possibly operate an effective health care system (because, in their view, only the profit motive makes for efficiency) or have a reflexive aversion to anything labelled “socialist.”

The role of the PIR as providing services for the non-profit sector further reinforces the perceptions of the “public service” people, because the non-profit sector is largely public service oriented.  For them, the PIR is not a business, but a public body.  It’s like calling NASA a business.  For the public service perspective, this is a huge change.  So for these people, the appropriate model for evaluating the sale is the type of deal a government would do in awarding a contract for delivery of public services – generation of support from the stakeholders by explaining the case, consultation on solutions, public tenders with appropriate transparency, formal mechanisms for challenging the decision, etc.

From my experience of US culture, there is much less value given to government provision of many services, and much more faith that commercial businesses will act ethically, than I see in Europe.

This is why we see continual arguments over the process.  Those supporting the sale use the model of business negotiations, while those opposing it use the model of government tenders.

From the “public service perspective”, the primary responsibility of the PIR is to meet the needs of the user, and making profit is important only to the degree it helps meet those needs.  From this perspective, if it becomes profit-oriented, the PIR’s responsibility has to be that of deriving more value from registrants than it delivers, in order to pass that profit to shareholders.  We saw many arguments about this point.  The defence has been that the PIR will have to meet registrants’ needs in order to make a profit.  However, the concern is that we have to trust it to do so; we can’t force it.  Many, especially in Europe, simply don’t trust any business that much – because it’s a business.  Furthermore, moving to a for-profit model introduces a potential conflict within the PIR between service level and profitability.  Whether that conflict ever reaches the point of service decline is impossible to predict.  However, the risk doesn’t exist at all if the PIR is non-profit.  So the sale creates a new risk.  Many don’t think it’s acceptable for ISOC to create such risks.

Ethos has made public moves to address this concern with talk of B-Corp certification and the Stewardship Council.  However, they have not made specific binding commitments.  They have made statements, but there is nothing legally holding them to any course of action.  Furthermore, Andrew has specifically stated there are no undisclosed terms in contracts binding Ethos, and that what Ethos does once the sale is complete will no longer be an area of ISOC responsibility.  Ethos haven’t formally committed to going B-Corp, and in any case it wouldn’t address the issue, because B-Corp certification doesn’t stop you profit-gouging, reducing services and so on.  So B-anything won’t legally bind Ethos to the degree necessary to address the fears.  For many, the Stewardship Council has not adequately addressed this issue either.  Firstly, Ethos appoint the initial members of the council, who then appoint their own successors forever.  There’s no transparency and no representation of the stakeholders.  Secondly, Ethos have not specified what formal control the council will have over PIR decisions.  As far as I understand from their webinar appearances, it will be largely advisory.  But they have specifically stated the Stewardship Council will not have veto over price increases.  So the only concrete commitment Ethos have made about the council is that it won’t have the power to prevent the very thing it is being offered as a solution to.

From the public service perspective, arguments about giving the PIR the ability to expand services, increase customers and so on are irrelevant to assessing the deal.  To the public service view, not only do such arguments fail to justify the sale, but making them at all demonstrates an incorrect understanding of what the PIR is – a public service.  Making these arguments in response to the public service perspective makes matters worse, because it shows a failure to understand why the deal is considered objectionable by the public service group in the first place.  Pointing out that .orgs have been managed by for-profits in the past simply repeats the same error – it’s like saying “things were bad before, so it’s OK if we make them bad all over again.”

From the public service perspective, this is also a significant change in the overall structure of the internet.  Domain names are the root of the whole tree.  Domain names are the gateway to presenting yourself to the internet.  The ability to add or remove an organisation from the web is a significant power in a global society.  How domain registration is run has subtle but huge repercussions throughout the web.  From this perspective, there is a structural need for some part of domain name registration to operate in a non-commercial fashion.  The reasons why are too many to list here, but can be summarised as “a healthy society is impossible if all services are provided by commercial organisations.”

If you distrust commercial provision of public services, then multi-level shell corporations, expansive NDAs and hidden director identities all look highly suspicious.  It might be business as usual, in business.  But in a public service, expectations are different, and so these business activities are themselves proof that the deal is inappropriate.

As I see it, the only way the public service perspective could be satisfied would be if there were legally binding limits on what Ethos could do.  These would have to cover any onward sale of the PIR, probably also who could be a shareholder, the powers of the board, and .org price changes.  Such commitments would go far beyond what most businesses would agree to, and Ethos have shown every sign of avoiding them.

Those behind and supporting the sale, I suggest, do not see things this way.  For them the PIR is a business, making a profit is fine, and it’s no big deal if every registration authority is a commercial enterprise.  For them, this is not a significant structural change in the internet.  I suggest this type of thinking is fairly common in the USA, but rare throughout most of Europe.  I suggest one or the other perspective would dominate in most countries.

As a global organisation, I don’t think ISOC should pick a side between these two perspectives.  I think ISOC leadership needs to understand both perspectives, and I think it needs to give space to both.  People in ISOC governance need to recognise that some members are ardent Marxists who would introduce a socialist state if they could, yet would be considered conservative amongst their even more Marxist community.  Other members would like to limit the role of profit-making companies to very limited sectors of the economy.  Still others think Marxism is as close to evil as a political ideology can be.  Some members would always trust the government to deliver a better service than any commercial company, while others assume every government enterprise must be bloated and inefficient.  ISOC has to accommodate all these extremes.  I do not see signs that it has done so.  My reading of the first emails on this issue is that ISOC leadership were surprised by the objections of the public service perspective, which indicates they had not realised this way of thinking existed within ISOC.  The arguments about describing the PIR as a business, denying this was a significant change to overall internet governance, and so on, suggest a way of thinking so deeply immersed in the commercial perspective that the public service perspective is almost incomprehensible.  I am not suggesting they should have agreed with it, merely that they should have anticipated it, should understand it, and should be able to engage with it rather than simply asserting the opposite.  There’s no point having a global mission to connect everyone if we’re not going to use that as an opportunity to understand each other better.  And as a global organisation, ISOC needs to accommodate the widest range of perspectives possible.