Is AI Creating Value for Startups?

A decade ago, only the world’s largest businesses could afford to invest in AI, but things have changed drastically in the last 5-6 years. AI has become the new normal thanks to rapid advances on multiple fronts. Today, startups have more opportunities than ever to leverage AI in products and services that deliver value to their stakeholders.

Some of the areas where startups are using AI to bring value to their processes and stakeholders are:

Creating Efficiencies:

  1. Automation of processes (Complex processes have been automated, reducing the cost of the business)
  2. Analytics (Huge sets of data can be analyzed and better strategies can be developed)
  3. Development efforts (Services for code review, deployments, QA)

Increasing Productivity:

  1. Manufacturing – (AI can validate whether intricate goods like microchips have been perfectly produced)
  2. Predictive & Preventative Maintenance – (AI is used for identifying PM schedules that can be optimized prior to the predicted breakdown, which avoids downtime)
  3. Production Optimization – (AI will constantly learn from all production data points to continuously improve process parameters.)

Improving Customer Experience:

  1. Better insights using AI – helping businesses match product offerings to customer preferences
  2. Customer support – Chatbots (One study found that about 65% of agents working alongside AI-based chatbots were able to spend more time on complex customer problems and solve them faster.)
  3. Consumer Insights – (AI can help create customer personas, match customers with products they are more likely to buy, and display the most relevant content to readers)

With the rapid growth of AI and its applications, startups can expect it to eliminate mundane day-to-day business operations and enhance the efficiency of their business processes. AI solutions are often more cost-effective than traditional methods because they take a systematic approach to every aspect of a problem. AI sits at the cutting edge of innovation.

At Deep Camp, we mentor startups on the many ways they can improve their processes and yield better outcomes through AI. With a focus on scalability through AI, we mentor them across business functions such as Marketing, Sales, Finance, Operations, HR, and Strategy.

Deep Camp is a startup accelerator program that focuses on tech businesses. Entrepreneurs are provided access to Affine’s Centre of Excellence (CoE) teams across AI, Engineering & Cloud to build bleeding-edge solutions and find swift market access.

Optimizing Inventory with the Power of AI

Inventory management is a critical function for businesses that must store products for eventual sale. Stocking the right quantity of goods at the right place and time, taking demand and supply scenarios into consideration, is vital to fulfil consumer expectations in a timely manner, reduce wastage, stay efficient, and, in turn, earn substantial profits. Needless to say, the advent of breakthrough technologies such as Artificial Intelligence (AI) has simplified and revolutionized inventory management, enabling business owners to act diligently and optimize their stocks from the manufacturing stage through product distribution.

Uncertain Times Call for Stronger Measures

We are in the midst of uncertain, tough times led by Covid-19, which took businesses by surprise and jolted the operations of many. Ever since, demand and supply have been prone to constant fluctuations, and consumer behaviour keeps taking new turns. Understanding and optimizing inventory has emerged as a serious challenge, and businesses are actively on the lookout for solutions to effectively track and manage their stocks. Artificial Intelligence, with its potential to churn and distil large volumes of data, provides cutting-edge inventory insights and visibility to businesses, enabling them to enhance their revenues, customer experience and brand image.

AI’s ability to leverage large swathes of real-time data on the dynamics that affect inventory stock levels differentiates it from traditional tools. AI can predict scenarios, recommend actions and even act, independently or with human approval. Take the example of a digital twin with a global view of all suppliers, manufacturers, transportation, warehouses, and retailers. Data from IoT devices such as GPS and RFID tags, business applications like ERP and WMS, and third-party sources can be interconnected to model, monitor, and manage real-world supply chain environments, giving a real-time view into product requirements.

The Positive Impact of AI-Led Inventory Optimization

Let us understand how the use of AI can provide positive outcomes across various scenarios in inventory optimization. 

a) Analysis of consumer shopping behavior – In today’s unpredictable times, the buying behavior of the consumer is no longer constant. For businesses, it means staying agile and alert at all times to determine what will appeal to the consumer at any given point in time. AI intelligently slices the wealth of consumer data, including purchase history, browsing patterns, social media activity, etc., to accurately catch the pulse of consumer behavior. These powerful behavioral analytics empower businesses to determine their stock items and quantities effectively.

b) Accurate prediction of demand – The multiple datasets generated by businesses can be intelligently harnessed by AI to identify forthcoming market demand patterns with greater ease and accuracy. With Machine Learning (ML), real-time data can be leveraged to accurately predict demand and thereby determine how much inventory is actually needed to fulfill that demand. This helps businesses handle out-of-stock and over-stock issues intelligently. As per McKinsey, AI-powered demand forecasting has the potential to reduce supply chain errors by 30-50%. A minimal forecasting sketch appears after point e) below.

c) Improved Warehouse Management – With the correct amount of goods held at warehouses, space logistics and productivity are automatically optimized. Over-stocking entails huge costs and also consumes large chunks of storage space, issues that are duly resolved with the help of AI-powered insights. As a result, warehouse teams are able to operate efficiently, which improves the prospects for growth. McKinsey highlights that the use of AI reduces warehousing costs by approximately 10-40%.

d) Time Management – As machines take over the processing of large amounts of otherwise inaccessible data, businesses gain access to important information such as the expected time of arrival of goods that might be out of stock. This can then be communicated to customers, which helps strengthen the brand-customer relationship.

e) Scalability – AI-based inventory management, with its accurate forecasting capabilities, allows businesses to respond instantly to any unexpected fluctuations in demand or supply, thereby allowing them to scale their stock up or down. This ability to act in real time helps businesses provide quick, quality service to their consumers and also helps them clear their stock profitably.
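
To make the demand-prediction idea in point b) concrete, here is a minimal, hypothetical Python sketch using pandas and scikit-learn. The file name sales.csv, the lag features, and the 30-day back-test are all illustrative assumptions, not Affine’s actual implementation.

```python
# Minimal demand-forecasting sketch (illustrative only).
# Assumes a hypothetical file "sales.csv" with columns: date, units_sold.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("sales.csv", parse_dates=["date"]).sort_values("date")

# Simple lag features: demand 1, 7 and 28 days earlier.
for lag in (1, 7, 28):
    df[f"lag_{lag}"] = df["units_sold"].shift(lag)
df = df.dropna()

features = ["lag_1", "lag_7", "lag_28"]
train, test = df.iloc[:-30], df.iloc[-30:]      # hold out the last 30 days

model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["units_sold"])

forecast = model.predict(test[features])
mae = (test["units_sold"] - forecast).abs().mean()
print(f"30-day back-test mean absolute error: {mae:.1f} units")
```

In practice, real-time signals such as promotions, pricing, weather, and supplier lead times would be added as features, but the overall structure of the forecasting problem stays the same.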

The right solution tackles the problem in the right manner at the right time.

Enter, Affine

Affine’s new-age, AI-powered Inventory Replenishment System helps businesses improve their inventory efficiency by optimizing a wide range of variables.

  • Demand Forecast – Accurate demand prediction algorithms used by Affine allow businesses to know what is needed and when.
  • Lead Time – Intelligence on lead times helps businesses combat issues such as delayed supply or low inventory.
  • Inventory Carrying Cost – The cost of holding unwanted inventory is automatically eliminated, since the AI-powered smart insights allow just the right inventory to be held in warehouses.
  • Ordering Cost – Ordering just the right inventory keeps ordering costs, including overheads, in check. (A simple reorder-point and order-quantity sketch appears below this list.)
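
As a rough illustration of how these variables interact, the sketch below computes a textbook safety stock, reorder point, and economic order quantity (EOQ) from a demand forecast, lead time, ordering cost, and carrying cost. All input values are assumed for illustration; this is not Affine’s replenishment logic.

```python
import math

# Illustrative inputs (assumed values, not client data).
daily_demand_forecast = 120      # units/day, e.g. from a demand model
demand_std_dev = 25              # daily demand variability (units)
lead_time_days = 7               # supplier lead time
service_factor = 1.65            # z-score for roughly a 95% service level
annual_demand = daily_demand_forecast * 365
ordering_cost = 50.0             # cost per purchase order
carrying_cost_per_unit = 2.5     # annual holding cost per unit

# Safety stock and reorder point: order when on-hand stock falls to this level.
safety_stock = service_factor * demand_std_dev * math.sqrt(lead_time_days)
reorder_point = daily_demand_forecast * lead_time_days + safety_stock

# Economic order quantity: batch size balancing ordering vs. carrying cost.
eoq = math.sqrt(2 * annual_demand * ordering_cost / carrying_cost_per_unit)

print(f"Safety stock:   {safety_stock:.0f} units")
print(f"Reorder point:  {reorder_point:.0f} units")
print(f"Order quantity: {eoq:.0f} units per order")
```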

In fact, Affine has extensive experience in implementing AI and ML solutions across verticals in supply chain management. We have empowered a carrier fleet with route optimization, optimized inventory for a large shoe manufacturer’s stores, delivered demand forecasting for a coffee giant, and improved on-time delivery for an e-commerce giant. These examples are just the tip of the iceberg of our experience and expertise in AI-led optimization for organizations across sectors and verticals.

Before we go…

Inventory optimization leveraging AI is all set to take the business world by storm. The large swathes of data now available to organizations, their investment in cutting-edge 4IR technologies, and unpredictable consumer behaviour are all leading this change from the front.

If you too are on the look-out for an advanced inventory optimization tool, experience the world of AI with Affine and elevate your inventory experiences.

About The Author(s):

This blog is a result of research efforts by Affine’s Manufacturing CoE team, our Centre of Excellence that exists for the sole purpose of hyper-innovation in the manufacturing space. The Manufacturing CoE is a dedicated in-house team responsible for continuously innovating manufacturing solutions and services powered by AI, AE & Cloud capabilities. Our enterprise-grade solutions are new, futuristic, and poised to be the next big thing in Industry 4.0.

Real Time Computer Vision Ushering in a New Era in Speed and Agility

With the rapid evolution of compute engines and algorithms in AI, Computer Vision (CV) solutions are becoming increasingly accurate and are seeing a high rate of adoption across industries. And with the innovations that cloud partners like AWS provide for close-to-real-time computation, many companies are integrating CV solutions to cater to their on-site monitoring and surveillance needs. The technology is growing at a rapid rate and has significant potential to benefit a number of industries.

Impacting Organizational Operations with Real Time Capabilities

Computer Vision is proving to be extremely helpful in operations across businesses. The speed of production has gone up without compromising product quality. Today, as scale and expectations around quality rise, there is little to no room left for error. With CV, businesses gain the upper hand in monitoring parts of the supply chain in real time. The technology also helps enhance accuracy in process control, product visibility and the timely identification of lost inventory.

CV has made many factory-based jobs safer and more efficient than ever before. For example, missing safety gear or any presence in a restricted area can be detected and alerts generated in real time to save lives and property on the shop floor.

Also, many manufacturing jobs demand manual quality checks for defect identification, which can be error-prone at times. With the deployment of CV systems, defects or errors in products can be tracked in real time, ensuring smooth quality assurance.
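
As a hedged illustration of how such real-time monitoring can be wired together, the sketch below reads frames from a camera with OpenCV and flags frames whose defect score crosses a threshold. The scoring function here is a trivial brightness-deviation stand-in; in a real deployment it would be replaced by a trained defect-detection model.

```python
import cv2
import numpy as np

def defect_score(frame: np.ndarray) -> float:
    """Placeholder scorer: deviation of mean brightness from a reference value.
    In production this would be a trained defect-detection model."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return abs(float(gray.mean()) - 128.0) / 128.0

THRESHOLD = 0.4
cap = cv2.VideoCapture(0)                 # camera index, or a video file path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    score = defect_score(frame)
    if score > THRESHOLD:
        # In a real system this would raise an alert or stop the line.
        print(f"Possible defect detected (score={score:.2f})")
    cv2.imshow("line monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop monitoring
        break

cap.release()
cv2.destroyAllWindows()
```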

Clearly, Computer vision is enabling organizations with speed, quality, safety, and efficiency. It is an idea whose time has come in the industrial sector.

Securing Nations and Citizens, Not Just Businesses

Computer Vision has been a revolutionary technology in the security segment, across all aspects of security, whether defence or business. The use of Computer Vision techniques such as facial recognition and biometric authentication has helped banks strengthen their security and deal with fraud. With manual security checks alone, it is nearly impossible to ensure safety and security at a time when things move in the blink of an eye. The cutting-edge technology of CV has made it easier for businesses to quickly spot errors and risks.

The implementation of CV has also been phenomenal in the military. It has enabled armed forces to elevate their operations through surveillance, providing better security for soldiers and citizens. CV is also being used to power autonomous military vehicles and weapons such as drones and robots. CV-based algorithms help identify targets and decide on prompt, appropriate action.

It wouldn’t be incorrect to say that CV has an augmenting impact, not only on organizations and industries but on entire nations and civilizations.

Reduction in Latency with Edge Computing

Latency is neither a new business issue nor a small one. But thanks to edge computing, things are moving in a better direction. Real-time Computer Vision has provided ground-breaking solutions, and the edge has brought latency down. Alert and response generation is now more than 5 times faster, with latency coming down from a few seconds to a few hundred milliseconds or even lower. This is a huge technological accomplishment with far-reaching business implications.

Affine Leading the Way in Real Time Computer Vision

Affine is proud to share that it has been recognised as a Preferred Global Service Partner for AWS Panorama. Through this collaboration, Affine’s expertise can be leveraged for faster deployments of Computer Vision-based applications, enhancing security, innovation, and business excellence.

Talk to our experts on CV solutions

Usher in Quality 4.0 With Digital Quality Management System

The technology revolution of the last decade is ushering in a new industrial revolution – Industry 4.0. Disruptive technologies, exponential growth, and superfast manufacturing are some of the key indicators of these new times. However, to stay true to the revolution and make the most of it, manufacturers around the world need to invest in Quality 4.0. What is Quality 4.0? According to KPMG, “Quality 4.0 is a state of transformation that references the future of organisational excellence and quality within the context of Industry 4.0. Quality 4.0 combines the capabilities of Machine learning, Artificial Intelligence, Cloud Computing and Big Data with conventional systems of quality management for driving continuous process improvement and for improving overall business performance.” The reason for Quality 4.0 is simple – in a world where consumers have unprecedented choice, errors and compromises on quality could cost organizations their reputation and revenue. At Affine, we believe in the power of a Digital Quality Management System at the core of Quality 4.0.

Digital Quality Management System, Protecting Manufacturers’ Reputation and Revenue

A Quality Management System (QMS) is a key element of Industry 4.0. All manufacturing organizations need one, as it enables manufacturers to electronically monitor, control, record, and document their quality processes. This in turn ensures that their products are manufactured within high quality parameters, comply with all applicable standards, and leave the line free of defects. To become a core element of Quality 4.0, a sound and strong Quality Management System takes into account people, process and technology to deliver the best manufacturing outputs.

A quality management system typically has three key aspects:

  • Organize: Organizing is the process of translating quality policies into clearly defined procedures, processes, and work instructions, and segregating their contents.
  • Analyze: Analyzing is the process of transforming those policies into processes and instructions that achieve the defined standards.
  • Finalize: Finalizing is the process of putting the documented processes into action through a systematic and methodical approach.

The Different Approaches and Scopes for Digital Quality Management System 

Defect Detection: a digital QMS for defect detection quickly identifies defects or anomalies on complex physical surfaces by leveraging deep learning and advanced vision systems. This helps the manufacturing sector reduce rejection costs. Detecting defects in the early stages of manufacturing also helps reduce operational costs.

With Data Analytics, digital QMS models achieve predictive quality management capabilities by leveraging Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), intelligent automation, etc. These emerging technologies ease the production process and ensure that the end customer always receives defect-free products.

A digital QMS can be used on top of MES, ERP, LIMS, and other software to expand strategic capabilities within the organization and provide complete visibility. The improved knowledge it yields helps in new product design and development and effectively optimizes the production process on the shop floor. Combining a digital QMS with other industry software gives real-time market evaluation of various products, minimizes negative brand exposure and also decreases waste, helping reduce costs.

The hallmarks of a sound Digital Quality Management System include:

  • Global visibility across distributed operations
  • Enforcement of processes to ensure compliance
  • Event monitoring and early trend escalation
  • Global risk management
  • Automatic containment of suspect items
  • Intelligent root-cause analysis
  • Automated quality assurance
  • Adaptable best practices
  • Enterprise scalability 

Digital Quality Management System in Action

Affine’s Manufacturing Centre of Excellence has a strong Digital QMS in place for our manufacturing customers. We have delivered pathbreaking projects in this space. One example is Surface Defect Detection.

The femcare division of a leading CPG manufacturing company wanted to automate the manual process of detecting defects and faults during the production of its femcare products.

We leveraged image annotation to identify different types of defects and mismatches during the production process. We then augmented the images during preprocessing to boost their number and create an exhaustive training set. Multiple models were trained and checked for accuracy, and the best model was selected based on the accuracy of its outputs.
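
The following is a simplified, hypothetical sketch of this kind of augmentation-and-training pipeline, written with PyTorch and torchvision. The directory layout, augmentation choices, and ResNet18 backbone are assumptions for illustration and do not describe the actual client deliverable.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Augment annotated images to enlarge the training set (illustrative choices).
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Assumed layout: data/train/<defect_type>/*.jpg, one folder per defect class.
train_ds = datasets.ImageFolder("data/train", transform=train_tfms)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Fine-tune a pretrained backbone to classify defect types.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Several such models can then be compared on a held-out validation set, mirroring the selection step described above.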

The output was consumed in the form of a web-based tool showcasing the extent and type of damage to femcare products, along with the tagged defect location on each product.

The process, although initially novel and complex, ensured that the final femcare products the manufacturer produced were always in good condition and free of defects. With streamlined, AI-driven defect detection processes, the manufacturer achieved a 98.89% success rate in defect detection.

Before you go

The rapid growth in demand for Digital Quality Management Systems is being driven by consumer demand for reliable products. As the world moves more and more towards internet-based sales, a QMS is going to become increasingly critical for manufacturers, as processing returns and exchanges will become a high-cost center.

It is up to manufacturing organizations to invest in QMS well in time, in order to win the trust of their customers in the hyper-competitive era in which they now operate. Affine’s Manufacturing CoE is here to help. Write to us for a demo and we’ll get you started on your journey of Quality 4.0 in order to thrive in Industry 4.0.

A Shift From Model-Centric to Data-Centric AI

Recently, AI has taken off and has been bringing revolutionary changes to industry. Its influence is seen in many aspects of business. Many methodologies and algorithms with varying degrees of sophistication have been developed to address a variety of problems, and they are designed to concentrate on the technical aspects of problem-solving. So the emphasis lies on the coding part of the problem. However, any AI solution built to solve a problem consists of two parts – the algorithm and the data. The recent Data-centric AI campaign launched by Andrew Ng emphasizes that models have achieved a good amount of sophistication and it is high time we put more focus on the quality of data.

What is Data-Centric AI? And How Does It Help Data-Driven Businesses?

Many AI algorithms with varying degrees of sophistication have been developed to address a variety of problems (e.g., ResNet50, Inception, VGG16 for image classification). Along with that, many methodologies have been developed to further fine-tune models, such as regularization, cross-validation, etc. However, these techniques focus on the technical side of problem-solving, so the emphasis lies on the coding part of the problem.

The core idea of Data-centric AI is that no amount of fine-tuning can fix bad data. Many of the models presently in use are highly complex and can solve complex challenges. But if the data is incorrect or not clear enough, the model will learn it exactly as presented. Therefore, Andrew Ng proposes to focus more on the data: a methodology where the model is kept the same and the data is modified iteratively. In other words, the model can be effectively informed by high-quality data. For this to work well, a proper and deep understanding of the data is crucial. This matters because what helps solve a business problem is a solid understanding of the problem itself; that understanding is what allows us to systematically engineer the data, and it can come only when there is clarity about the data.

Characterizing the Aspects of High-Quality Data

For deeper insights we want refined, high-quality data, but how do we define it, and what aspects of quality must be maintained?

Consistency: 

The data should be well defined, with clear guidelines and definitions for annotation and labeling. This could require inputs from multiple labelers and subject matter experts. For example, consider an object detection problem in which an image of two lions is labeled very differently by different annotators. Both ways are correct; however, the lack of a clear definition (how to label an object when another object is in the foreground) led to different annotations. In more complex problems, this can be counterproductive. Therefore, it is essential to have clear guidelines.
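
One simple way to quantify labeling consistency, assuming bounding-box annotations, is to measure the intersection-over-union (IoU) between two annotators’ boxes for the same object, as in this small illustrative sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two annotators label the same lion; a low IoU flags an unclear guideline.
annotator_1 = (40, 60, 220, 300)
annotator_2 = (35, 150, 215, 300)   # excludes the occluded upper body
if iou(annotator_1, annotator_2) < 0.7:
    print("Low agreement - the labeling guideline needs clarification.")
```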

Metadata:

Information such as time of creation, source, etc. is also important for determining the kind of data to use. It helps us determine the principles on which the AI solution should be built. The ability to select data precisely can be beneficial when dealing with data drift and updating the model.

High-quality data is essential for developing a clearer understanding of the problem. It orients the decision-making process to be data-driven rather than technique-driven. Proceeding this way requires closer collaboration with subject matter experts. As a result, the solution can be developed in a way that allows Data Scientists to comprehend and manage how the model learns, which will almost certainly lead to better solutions and improved performance.

The philosophy of Data-centric AI is aimed at the best utilization of data, which requires clear standards set up from the very beginning, i.e., at data collection. It can motivate businesses to standardize data collection and related processes across their value chains. This will streamline data management, which in turn will make accessing, monitoring, and analyzing data to build solutions a lot easier.

Data-centric AI brings a bag full of benefits. Since this paradigm requires a deeper understanding of data, it integrates easily with data preprocessing, which usually takes up a massive amount of time when building a solution. As a result, the resource allocation for training in the Data-centric paradigm can be far lower, since it does not require a lot of hyperparameter fine-tuning. These are just a few of the benefits of Data-centric AI.

Use Case Scenario: A Close Look at Data-Centric AI in Practice 

Now let us look at how we can improve a model in the Data-centric paradigm. We will share some of our learnings from the Data-centric AI competition (identifying Roman numerals from images). For the competition, participants were asked to alter the training and validation datasets in such a way that the resulting model would have the best prediction accuracy on a hidden test dataset. For the sake of simplicity, let us imagine that we are building a model that classifies images into cats and dogs. We need to pay attention to the following details.

  1. Ensure proper labeling according to well-defined guidelines, e.g., will the image be considered only if the full body of the animal is visible, or is just the head enough? Does the animal have to be facing the camera?
  2. Ensure that all the images adhere to the same standards, e.g., should they be greyscale or RGB? Should watermarks be allowed? What if the image contains both dogs and cats?
  3. Make sure that different subclasses are represented adequately in both the training and validation datasets, e.g., how many different breeds of the species are present in the data? Are they distributed similarly in both training and validation datasets?
  4. Make sure that when resizing the image for input, relevant details are not lost. In other words, understand how much detail the model needs in order to make better predictions, i.e., when the image is resized, examine how much information is lost. Does the model use the ears to identify the animal? Are they lost when resizing the image?
  5. Add different data augmentations and ensure they do not introduce noise. For example, horizontal and vertical shifts may be suitable augmentations, but a vertical flip may not be. When a horizontal/vertical shift is used, does it cause the image to lose relevant parts? (A minimal augmentation sketch appears after this list.)
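
As a minimal sketch of point 5, the torchvision pipeline below applies shift-style augmentations while deliberately leaving out vertical flips; the file path and parameter values are hypothetical.

```python
from torchvision import transforms
from PIL import Image

# Shift-style augmentations only; vertical flips are deliberately excluded
# because an upside-down cat or dog is not a realistic example.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # h/v shifts up to 10%
    transforms.RandomHorizontalFlip(p=0.5),
])

img = Image.open("data/train/cat/cat_001.jpg")   # hypothetical file path
for i in range(3):
    # Inspect a few augmented samples by eye to confirm the animal
    # (ears, face, body) has not been shifted out of the frame.
    augment(img).save(f"augmented_{i}.jpg")
```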

This brings up another question – how do we choose augmentations? There are a few guidelines on choosing the augmentations that can enrich the model.

  1. Try out individual augmentations and see how performance varies. If some augmentations improve accuracy, that can help us build a better model. These augmentations need not be applied to the whole dataset; sometimes an augmentation improves performance only for a specific class.
  2. We need to have an idea of the degree to which an augmentation should be applied. Too little or too much can be detrimental. We need to find the optimal amount of a particular augmentation, or combination of augmentations, to get to the best solution.
  3. The augmented examples should be realistic enough that humans can also classify them. If humans cannot identify the augmented image, it becomes bad data.

Following these steps, the best way to improve model accuracy is to perform iterations of error analysis. This is one of the central parts of the Data-centric paradigm, and it enables us to systematically improve the quality of the dataset. Starting with the original data, we built a baseline model. Using this model, we then examined the predictions on the validation and test data. This helped us understand the cases where the model did not predict correctly. We then formulated different hypotheses about what could have led to these failures. To test these hypotheses, we modified the original dataset:

a) By reconfiguring training and validation datasets 

b) By implementing augmentations 

Each time, we cleaned the dataset after implementing the augmentations. If any of these steps led to an improvement in accuracy (in other words, validated our hypotheses), they were adopted into the dataset. If not, they were rejected. This feedback process was repeated several times, which led not just to better performance but also to a better understanding of how the model works. As a result of this iterative error analysis, we achieved a ~20% increase in accuracy (64.7% to 84.7%) on the hidden dataset.
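
A schematic, self-contained version of this error-analysis loop is sketched below on a toy scikit-learn dataset standing in for the Roman-numeral images; the steps, not the specific data or model, are the point.

```python
from collections import Counter
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for the image data; the real datasets were Roman-numeral images.
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# 1. Train a baseline model on the current version of the dataset.
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# 2. Collect validation errors and group them to form hypotheses
#    (e.g. "class 8 is confused with class 1 because of thin strokes").
pred = model.predict(X_val)
errors = [(true, p) for true, p in zip(y_val, pred) if true != p]
print("Most common confusions:", Counter(errors).most_common(3))

# 3. Modify the data (relabel, re-split, augment) to test each hypothesis,
#    retrain, and keep the change only if validation accuracy improves.
baseline_acc = model.score(X_val, y_val)
print(f"Baseline accuracy: {baseline_acc:.3f}")
```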

Get Started From Here…! 

Some of these lessons can also be applied to structured datasets such as tabular data. For example, consider customer data. While labeling, one needs to be clear about what data is to be collected and how; proper planning is necessary for that. Then we can start with data selection: we should remove records that do not have the required features. It is also important for the data to have consistent formatting, i.e., the same date format, categorical labeling, number of decimal places, etc. Next, various considerations are necessary to ensure a proper training/validation split. For example, one should ensure a similar distribution of high spenders and low spenders in both the training and validation datasets; otherwise, the predictions can be biased. We should also understand how much detail the model needs to learn, e.g., for a given variable, at what order of magnitude does it matter for making correct predictions (in the range of 10s, 100s, etc.). These are some examples.
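
A minimal pandas sketch of the formatting and distribution checks described above might look like this; the file name, column names, and spender-tier definition are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                     # hypothetical customer data

# Consistent formatting: one date format, rounded monetary values.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["annual_spend"] = df["annual_spend"].round(2)

# Drop records missing required features.
df = df.dropna(subset=["signup_date", "annual_spend", "segment"])

# Label spender tiers, then stratify the split so high and low spenders
# are represented similarly in the training and validation data.
df["spender_tier"] = pd.qcut(df["annual_spend"], q=2, labels=["low", "high"])
train, val = train_test_split(df, test_size=0.2, stratify=df["spender_tier"],
                              random_state=0)
print(train["spender_tier"].value_counts(normalize=True))
print(val["spender_tier"].value_counts(normalize=True))
```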

Don’t Stop… You’re One Step Closer to the Solution! 

Despite all these efforts, these steps may still prove insufficient. As far as tabular data is concerned, feature engineering plays a key role. For example, it is possible that some features have not yet been considered and pieces of information are missing. It is even possible that the current features, once represented in a different way, could bring new insights to the model. This is especially important given that error analysis on tabular data is quite different: with unstructured data such as images, a Data Scientist can examine even a few hundred images, whereas with tabular data the number of entries might run into the tens of thousands.

Questions, Answered!

In the case of tabular data, another question that arises is the issue of outliers. They are not necessarily abnormal data but under-represented behavior. How can the AI solution address these scenarios? There could be two approaches to solving this issue. One is to use different sampling methods to augment the data, which is comparatively easier. The second is to look for more data, which requires persistent monitoring but can yield better examples.
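
The first approach, resampling, can be as simple as oversampling the under-represented rows before training, as in the illustrative sketch below (the file, column name, and 99th-percentile cut-off are assumptions).

```python
import pandas as pd
from sklearn.utils import resample

df = pd.read_csv("customers.csv")                      # hypothetical customer data

# Treat the rare-but-valid behaviour (e.g. very high spenders) as its own group.
rare = df[df["annual_spend"] > df["annual_spend"].quantile(0.99)]
common = df.drop(rare.index)

# Oversample the rare group so the model sees it often enough to learn it.
rare_upsampled = resample(rare, replace=True, n_samples=len(common) // 10,
                          random_state=0)
balanced = pd.concat([common, rare_upsampled]).sample(frac=1, random_state=0)
print(len(df), "->", len(balanced), "rows after oversampling")
```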

“Data-centric AI should not be just limited to data and algorithm; it should be an organizational attitude.”

Peeping In: All that an Organization Needs! 

Preparing for a paradigm shift to Data-centric AI requires changes in not just the method of problem-solving but also in the organizational attitude. We recommend the following:

  • Augment the Data-centric approach (which treats data as an asset and applications as ephemeral) with the existing data-driven culture (which views data through an application-centric lens)
  • Appoint a chief data officer who is responsible for implementing good data management practices
  • Develop a data strategy for the collection, storage, and usage of data at every step of the value chain, with future potential also in mind
  • Develop and implement policies for data quality and consistency
  • Provide flexible and easy-to-use tools for accessing and processing different kinds of data
  • Establish proper channels to communicate research results to decision-makers and ensure they are applied in practice
  • Increase the ease of using Data Science by having reusable, ready-to-use models, APIs, and formal processes in place
  • Put people who understand the data at the forefront; hire and cultivate new talent

Unleashing Data-Centric Power – MLOps and DataOps

To accommodate the Data-centric paradigm, we need to consider changes in every step of production. Hence it is essential to implement suitable practices in MLOps and DataOps. While working with big industrial datasets, keeping track of different Data-centric models (or rather, data distributions) can be quite difficult. The data subset we need to retrieve from storage to build and update models becomes an important aspect of developing the solution. Similarly, Data-centric AI prompts us to take a fresh look at MLOps practices. It can require a closer inspection of data acquisition and feature engineering. The best practices are still evolving and being explored.

However, the crucial prerequisite of Data-centric AI is that it demands higher-quality data and a deeper understanding of the data over techniques. This points to a greater involvement of subject matter experts, who can greatly simplify building the solution. Closer collaboration with Data Scientists and analysts would be required to structure the problem better, which in turn helps in understanding how to approach the data.

Is It Still Point-Blank? Or an End to All Data-Centric AI Confusion?

Certain new questions arising as we move to the new paradigm have not yet been answered. For example, where do we draw the line between Data-centric AI and model-centric AI? At what point can we decide that Data-centric AI has done its best and we need to fine-tune the model hyperparameters? This can take us back to the drawing board to reformulate the problem or alter the definitions and principles on which the model is built. Nevertheless, at the end of the day, we would refrain from claims that Data-centric AI is the panacea for building high-quality models. What Data-centric AI asks for is better-quality data.

It brings the focus from techniques back to understanding the problem, and the problem can be understood well only if the data is good enough. No amount of fine-tuning can fix poorly defined data. But that does not mean the model-centric approach is outdated; it also has its place. We can use model fine-tuning techniques on top of the Data-centric approach to augment and enhance the solution.

