Innovation Avenue: Generative AI
In our first topic we look at Generative Artificial Intelligence (AI). If there is one AI-related buzzword at the moment, it must be ChatGPT, or its imaging counterparts such as Stable Diffusion, DALL-E and Midjourney.
Who cannot recall the AI-generated images of Donald Trump being arrested, or the Pope in a Balenciaga puffer jacket?
Generative AI is a technology that learns from existing examples to create new, realistic content on a large scale. It can make things like images, videos, music, speech, text, software code, and product designs that are original but inspired by what it learned.
This technology uses advanced techniques that are constantly improving. One important method involves training large AI models on vast amounts of unlabelled data and then fine-tuning them for specific tasks. Building these models requires complex mathematics and enormous computing power, but at their core they simply predict what comes next.
Right now, generative AI typically creates content in response to a request in plain language, with no coding needed. But businesses can use it for many things, like inventing new drugs, designing computer chips, and developing new materials.
Overall, while generative AI holds immense potential for innovation and creativity, its negative reputation stems from legitimate concerns about its ethical, social, and legal implications. Addressing these concerns requires a collaborative effort from researchers, policymakers, industry stakeholders, and civil society to develop robust safeguards, promote responsible practices, and mitigate the risks associated with AI technologies. Back in 2016, Cathy O'Neil wrote a highly recommended book on this theme called “Weapons of Math Destruction.”
“Weapons of Math Destruction” explores the negative impacts of algorithmic decision-making systems, which she refers to as “mathematical models that are opaque, unregulated, and destructive.” There are several similarities between the issues raised in O’Neil’s book and the negative attention surrounding Generative AI.
Opacity and Lack of Transparency
Both algorithmic systems and generative AI models can operate in opaque and non-transparent ways, making it difficult for users to understand how decisions are made or content is generated. This lack of transparency can lead to concerns about accountability, fairness, and bias in both cases.
Amplification of Bias and Inequality
O’Neil highlights how algorithmic models can perpetuate and amplify biases present in the data used to train them, leading to unfair or discriminatory outcomes, especially for marginalized communities. Similarly, generative AI models can inherit and propagate biases present in training data, resulting in biased or inappropriate content generation.
Impact on Individuals and Society
Both “Weapons of Math Destruction” and concerns about generative AI emphasize the potential negative impacts on individuals and society. O’Neil discusses how algorithmic systems can harm individuals by denying them opportunities or reinforcing societal inequalities. Similarly, generative AI models can produce harmful content, such as deepfake videos or misinformation, which can have serious consequences for individuals and society at large.
Lack of Regulation and Oversight
O’Neil highlights the lack of regulation and oversight surrounding algorithmic decision-making systems, which allows harmful practices to go unchecked. Similarly, the rapid development and deployment of generative AI models have outpaced regulatory frameworks and ethical guidelines, leading to concerns about responsible use and potential misuse of the technology.
Need for Ethical Considerations
Both discussions around algorithmic systems and generative AI emphasize the importance of ethical considerations and responsible practices. O’Neil advocates for greater transparency, accountability, and fairness in algorithmic decision-making, while proponents of generative AI stress the importance of ethical design, bias mitigation, and adherence to ethical guidelines in content generation.
In summary, the concerns raised in “Weapons of Math Destruction” share similarities with the negative attention surrounding generative AI, particularly regarding issues of opacity, bias, societal impact, regulatory oversight, and ethical considerations. Both highlight the need for greater awareness, accountability, and ethical stewardship in the development and deployment of algorithmic systems and AI technologies.
So, with all this negative attention surrounding the technology, what are some do's and don'ts for your business? In this post we look at a single do and a single don't.
- Do understand what your business needs.
- Don’t underestimate the importance of data privacy.
Do understand your business needs and how generative AI can address them.
Overall, generative AI has the potential to revolutionize various aspects of manufacturing operations by enabling companies to innovate, optimize, and adapt to changing market dynamics more effectively. By leveraging the capabilities of generative AI, companies can gain a competitive edge and drive sustainable growth in today’s rapidly evolving business landscape.
Patrick Strauss addresses the possibilities of generative AI for creating digital twins. A digital twin is the digital representation of a physical asset. It uses real-world data (both real-time and historical) combined with engineering, simulation, or machine learning (ML) models to enhance operations and support human decision-making.
Generative AI is set to revolutionize digital environments and digital twins by transforming them into “Intelligent Environments” and “Intelligent Twins.” This integration will be made possible by the augmentation of the human workforce through generative AI, which can even enable “self-driving” digital twins and environments.
Some example areas where generative AI can be beneficial are:
- Product design and prototyping
- Optimizing the manufacturing process
- Supply chain optimization
Product Design and Prototyping
Generative design aims to create new shapes, including shapes you may never have thought of, that offer a weight advantage over traditional designs by efficiently putting material where it is needed and removing it where it is not. The generatively redesigned VW Microbus, for example, drew a constant stream of visitors after being presented from the main stage.
Generative AI algorithms can assist in designing and prototyping new products by generating diverse and innovative design concepts based on specified parameters and constraints. Companies can use generative AI to explore a wide range of design possibilities quickly and efficiently, allowing for rapid iteration and optimization of product designs.
Optimized Manufacturing Processes
Generative AI can analyze production data and identify opportunities to optimize manufacturing processes, such as improving production efficiency, reducing waste, and minimizing downtime. Generative AI can help optimize material usage and resource allocation in the manufacturing process.
By analyzing material properties and production constraints, generative AI algorithms can suggest design modifications and material substitutions to minimize waste, reduce production costs, and improve sustainability. By leveraging generative AI algorithms, manufacturing companies can discover optimal process parameters and configurations that maximize productivity and minimize costs.
Supply Chain Optimization
Generative AI can help optimize supply chain operations by forecasting demand, identifying potential bottlenecks, and optimizing inventory levels. By analyzing historical data and real-time information, generative AI algorithms can provide valuable insights into supply chain dynamics, enabling companies to make informed decisions and improve overall efficiency.
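As a rough illustration of how such analysis might look in practice, the sketch below sends a set of historical shipment figures to a language model via the OpenAI Python SDK and asks for a short demand outlook and possible bottlenecks. The figures, the prompt wording, and the model name are placeholders, and output like this would complement, not replace, dedicated forecasting tools.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key are configured

# Hypothetical monthly demand figures for a single product (units shipped).
history = {"2023-10": 1180, "2023-11": 1320, "2023-12": 1710,
           "2024-01": 940, "2024-02": 1010, "2024-03": 1150}

# Build a plain-language question around the historical data.
prompt = (
    "You are a supply chain analyst. Given these monthly shipment volumes:\n"
    + "\n".join(f"{month}: {units}" for month, units in history.items())
    + "\n\nBriefly describe the demand trend, flag possible seasonal peaks, "
      "and suggest where inventory bottlenecks might occur next quarter."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```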
Don’t underestimate the importance of data privacy and ethical considerations with generative AI
Your staff is probably already experimenting with ChatGPT to help with their work tasks. Instead of outright banning its use, Gartner suggests creating a clear policy to prevent hidden usage and ensure compliance.
Make the policy straightforward. It could be as simple as three things not to do and two things to do, especially if you’re using tools like ChatGPT or similar off-the-shelf models:
- Don’t enter any personal information.
- Don’t input any sensitive data.
- Don’t input any company secrets or intellectual property.
- Remember to disable history if using external tools (like ChatGPT) that offer that option.
- Keep a close watch on the results. Sometimes, they can contain subtle mistakes, inaccuracies, or biased statements.
It all starts with awareness.
To prevent personal information from being entered into platforms like ChatGPT or similar generative AI models, educate users about the importance of protecting personal information and the potential risks of entering sensitive data into AI-powered platforms. Provide clear guidelines and examples, and clearly outline which types of information are considered confidential or sensitive and must not be entered into the platform.
Encourage users to anonymize or generalize data inputs when interacting with generative AI platforms. Instead of using real names, addresses, or other personal identifiers, encourage users to use generic placeholders or fictional examples.
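A simple technical guardrail can support this guideline. The Python sketch below is a minimal example, with purely illustrative patterns and names, of replacing recognizable identifiers with generic placeholders before a prompt is sent to an external service; a production setup would need far broader coverage or a dedicated PII-detection library.

```python
import re

# Illustrative patterns only; a real deployment would need broader coverage
# (names, addresses, customer IDs, etc.) or a dedicated PII-detection library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def anonymize(prompt: str) -> str:
    """Replace recognizable personal identifiers with generic placeholders
    before the text is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Please draft a reply to jan.jansen@example.com, phone +31 6 1234 5678."
    print(anonymize(raw))
    # -> "Please draft a reply to [EMAIL], phone [PHONE]."
```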
Besides the previously mentioned education and training, you can implement access controls and permissions to restrict who can interact with ChatGPT within your organization. If tools like ChatGPT are accessed through a proxy service, for example the Infor OS ION API gateway, logging and monitoring can be provided.
Conduct regular reviews and analysis of the interactions and content generated by your staff. Look for any patterns or anomalies that may indicate misuse or compliance issues.
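Such a review can be partially automated. The sketch below assumes a hypothetical CSV prompt log with "user" and "prompt" columns and counts matches against a few illustrative policy patterns per user; the file name, column names, and patterns are assumptions, not a prescribed format.

```python
import csv
import re
from collections import Counter

# Illustrative policy patterns; extend them to match your own guidelines.
VIOLATION_PATTERNS = {
    "possible email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "marked confidential": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def review_log(path: str) -> Counter:
    """Count potential policy violations per user in a CSV prompt log."""
    findings = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for label, pattern in VIOLATION_PATTERNS.items():
                if pattern.search(row["prompt"]):
                    findings[(row["user"], label)] += 1
    return findings

if __name__ == "__main__":
    # "prompt_log.csv" is a placeholder for wherever your proxy writes its log.
    for (user, label), count in review_log("prompt_log.csv").most_common():
        print(f"{user}: {count}x {label}")
```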
Charles Korthout
Solution Architect at partners 2 innovate
Join us on LinkedIn for a front-row seat to innovation, where ideas take flight and connections spark greatness!