0.1. Motivation

Artificial intelligence is influencing the future of almost every sector and every person on the planet. It has acted as the driving force behind the development of technologies such as big data, robotics, and the Internet of Things, and it will continue to do so for the foreseeable future.

This section gives you a basic understanding of machine learning (ML) and artificial intelligence (AI) concepts, as well as what Azure has to offer in terms of AI-related services.


1. Discover AI

Simply described, artificial intelligence (AI) is the development of software that mimics human actions and skills.



1.1. Machine Learning

What? This is how we "train" a computer model to make predictions and draw conclusions from data, and it is frequently the foundation for an AI system.

Machine learning is a form of artificial intelligence (AI) that data scientists use to teach computers to learn in a way similar to humans: learning from and improving upon past experience. It works by exploring data and identifying patterns, and involves minimal human intervention.

1.1.1. Machine learning in Microsoft Azure

The Azure Machine Learning service from Microsoft Azure is a cloud-based platform for generating, managing, and publishing machine learning models.

The following features and capabilities are available in Azure Machine Learning:

Automated machine learning: Non-experts can use this functionality to quickly develop a machine learning model from data.
Azure Machine Learning designer: A graphical interface for developing machine learning solutions without writing code.
Data and compute management: Professional data scientists can leverage cloud-based data storage and compute resources to run data experiment code at scale.
Pipelines: Pipelines can be defined by data scientists, software engineers, and IT operations professionals to orchestrate model training, deployment, and administration processes.

1.2. Anomaly Detection

What? It is the capacity to detect mistakes or unexpected behaviour in a system automatically.

In data analysis, anomaly detection is generally understood to be the identification of rare items, events, or observations which deviate significantly from the majority of the data and do not conform to a well-defined notion of normal behaviour.

In Microsoft Azure, the Anomaly Detector service provides an application programming interface (API) that developers can use to create anomaly detection solutions.
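The core idea can be illustrated with a simple statistical sketch. The z-score detector below flags values that deviate sharply from the rest of a series; it is only a conceptual illustration of what "deviates significantly from the majority of the data" means, not the model the Anomaly Detector service actually uses.

```python
from statistics import mean, stdev

def detect_anomalies(values, threshold=2.0):
    """Flag indexes whose z-score exceeds the threshold.

    Conceptual sketch only: the Anomaly Detector service uses far
    more sophisticated, seasonality-aware models.
    """
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > threshold]

# A sensor series with one obvious spike at index 6.
readings = [10, 11, 10, 12, 11, 10, 95, 11, 10]
print(detect_anomalies(readings))  # → [6]
```

A real solution would also account for trends and seasonality, which is exactly the complexity the Anomaly Detector API hides behind a single call.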

1.3. Computer Vision

What? It is the ability of software to use cameras, video, and photographs to interpret the world visually. Computer vision is a branch of artificial intelligence that deals with visual processing.

1.3.1. Computer vision services in Microsoft Azure

To assist you in developing computer vision solutions, Microsoft Azure offers the following cognitive services:

Computer Vision: You can use this service to extract descriptions, tags, objects, and text from photos and videos.
Custom Vision: Using your own photographs, train custom image classification and object recognition models with this service.
Face: You can use the Face service to create face detection and identification applications.
Form Recognizer: This service can be used to extract data from scanned forms and invoices.

1.4. Natural Language Processing (NLP)

What? It is the area of AI that deals with the ability of a computer to understand and respond to written or spoken language.

NLP enables you to create software that can:

  • Analyze and understand content from a variety of sources, including documents, emails, and other sources.
  • Synthesize speech replies and interpret spoken language.
  • Translate spoken or written sentences between languages automatically.
  • Interpret commands and decide on the best course of action.

1.4.1. Natural language processing in Microsoft Azure

You can utilize the following cognitive services in Microsoft Azure to construct natural language processing solutions:

Language: Access features for reading and analyzing text, training language models to understand spoken or text-based commands, and developing intelligent applications using this service.
Translator: This service allows you to translate text across over 60 languages.
Speech: This service can be used to recognize and synthesize speech as well as translate spoken languages.
Azure Bot: Conversational AI is the ability of a software "agent" to participate in a conversation, and this service provides a platform for it. Developers can use the Bot Framework to build a bot and manage it with Azure Bot Service, integrating back-end services like Language and connecting to online chat, email, Microsoft Teams, and other channels.

1.5. Knowledge Mining

What? It is the process of extracting information from enormous amounts of typically unstructured data in order to generate a searchable knowledge store.
Knowledge mining is an emerging discipline in artificial intelligence (AI) that uses a combination of intelligent services to quickly learn from vast amounts of information.

1.5.1. Azure Cognitive Search

Azure Cognitive Search is Microsoft Azure's knowledge-mining solution. To execute knowledge mining on documents, it can make use of Azure Cognitive Services' built-in AI capabilities, such as image processing, content extraction, and natural language processing. These AI capabilities enable it to index previously unsearchable documents and to swiftly extract and surface insights from enormous amounts of data.

1.6. AI Risks

Artificial intelligence is a powerful instrument that has the potential to benefit the entire globe. It must, however, be used responsibly, just like any other tool.

Some of the likely obstacles and hazards that an AI application developer can face include:

1.6.1. Fairness and Bias Concerns

One perceived advantage of AI is that algorithms can make fair decisions, unencumbered by human bias. But an AI system’s decisions are only as good as the data it’s trained on. If a particular population is underrepresented in the data used to train a machine learning model, the model’s output could be unfairly discriminatory towards that population.

1.6.2. Accidents and Physical Safety Considerations

If left unchecked, it's possible for AI's imperfections to cause physical harm. Consider facial recognition: if an innocent person is convicted of a crime based on faulty facial-recognition evidence, who is responsible?
Or look at self-driving cars, an AI application that is beginning to take hold in today's automobile market. If a self-driving car malfunctions and goes off-course, it poses an immediate risk to the passenger, other drivers on the road, and pedestrians.

1.6.3. Malicious Use of AI

AI researchers have managed to do a lot of good with the technology’s applications. But in the wrong hands, AI systems can be used for malicious or even dangerous purposes. In a 2018 report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” experts and researchers found that malicious use of AI technology could threaten our digital, physical, and political security.

1.7. Responsible AI

What? It is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society—allowing companies to engender trust and scale AI with confidence.

At Microsoft, AI software development is guided by six principles aimed at ensuring that AI applications give great answers to complex challenges while avoiding unanticipated negative repercussions.

1.7.1. Fairness

All persons should be treated equally by AI systems. Assume you're developing a machine learning model to support a bank's loan approval application. The model should predict whether the loan should be approved without any bias based on gender, ethnicity, or other factors that might give specific groups of applicants an unfair advantage or disadvantage.

1.7.2. Reliability And Safety

AI systems should be dependable and safe to use. Consider an AI-based software system for a self-driving car, or a machine learning model that detects patient symptoms and prescribes medications. Unreliability in these kinds of systems can put people's lives in jeopardy.

1.7.3. Privacy And Security

AI systems should be safe and secure, and they should respect people's privacy. Machine learning models, which underpin AI systems, rely on vast amounts of data, some of which may contain personal information that must be kept secret. Even after the models have been trained and the system has been put into operation, it continues to use new data to make predictions or perform actions that may raise privacy or security problems.

1.7.4. Inclusiveness

Everyone should be empowered and engaged by AI technologies. Regardless of physical ability, gender, sexual orientation, race, or other factors, AI should benefit all members of society.

1.7.5. Transparency

AI systems should be understandable. Users should be made fully aware of the purpose of a system, how it works, and what limitations can be expected.

1.7.6. Accountability

AI systems should be held accountable. Designers and developers of AI-based solutions should adhere to a set of governance and organizational principles to ensure that the solution adheres to clearly defined ethical and legal norms.


2. Machine Learning Tools

A lot of work goes into training and implementing an effective machine learning model, much of it time-consuming and resource-intensive. Azure Machine Learning is a cloud-based service that eases the procedures involved in preparing data, training a model, and deploying a predictive service, reducing the time these tasks take. The remainder of this section will focus on Azure Machine Learning, including its automated machine learning capabilities.

2.1. Automated Machine Learning

2.1.1. Azure Machine Learning workspace

You create a workspace in your Azure subscription when you use Azure Machine Learning. You may then utilize this workspace to manage your machine learning workloads' data, compute resources, code, models, and other artifacts.

2.1.2. Compute Resource Types

Compute Instances: Data scientists can utilize these workstations to work with data and models.
Compute Clusters: Scalable virtual machine clusters for on-demand experiment code processing.
Inference Clusters: Predictive service deployment targets based on your trained models.
Attached Compute: Provides access to current Azure compute resources like Virtual Machines and Azure Databricks clusters.

2.1.3. A Dataset

In Azure Machine Learning, data for model training and other operations is usually encapsulated in an object called a dataset.
Datasets identify data within different data stores, such as tables, files, folders, and documents. For example, an Azure Blob dataset specifies the blob container and folder in Blob Storage from which the activity should read the data.

Machine Learning Experiment: In Azure Machine Learning, the operations that you run are called experiments.

What is a pipeline?
A machine learning pipeline is the end-to-end construct that orchestrates the flow of data into, and output from, a machine learning model (or set of multiple models). It includes raw data input, features, outputs, the machine learning model and model parameters, and prediction outputs.
What is an inference pipeline?
An inference pipeline in Azure Machine Learning uses the trained model to assign incoming input data to pre-defined labels. It serves as a template for a web service that you can publish for other services and apps to use.

2.1.3.1. Deployment

In Azure Machine Learning, a service can be deployed to an Azure Container Instances (ACI) container or an Azure Kubernetes Service (AKS) cluster. An AKS deployment is recommended for production applications, and requires you to create an inference cluster compute target. An ACI service is a good deployment target for testing, as it doesn't require you to set up an inference cluster.
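Once deployed, either target exposes the model as a web service that clients call over HTTP. The sketch below builds such a request with the standard library; the endpoint URL, key, and input schema are hypothetical placeholders, since the actual schema depends on the scoring script of your deployment.

```python
import json
import urllib.request

# Hypothetical values - substitute your deployed service's URL and key.
endpoint = "http://my-aci-service.example.azurecontainer.io/score"
key = "<your-primary-key>"

# Azure ML real-time endpoints typically accept a JSON document of input rows;
# the feature values here are invented for illustration.
payload = {"data": [[37, 92.4, 1.2], [54, 110.1, 0.8]]}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",  # used when key auth is enabled
    },
)
# response = urllib.request.urlopen(request)   # uncomment with a live service
# predictions = json.loads(response.read())
```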

2.2. Machine Learning Designer

2.2.1. Regression Model

The supervised machine learning approach of regression is used to predict numeric values.

To use the Azure Machine Learning designer, you create a pipeline that you use to train a machine learning model. This pipeline starts with the dataset from which you want to train the model.
Following the creation and execution of a pipeline to train the model, you'll need a second pipeline that applies the same data transformations to new data and then utilizes the trained model to infer (or predict) label values based on its features. This will serve as the foundation for a predictive service that you can make available to applications.

2.2.2. Classification Model

A supervised machine learning technique for predicting categories or classes is classification.
Classification is a type of machine learning that predicts which category (or class) an object belongs to. A health clinic, for example, might utilize a patient's attributes (such as age, weight, blood pressure, and so on) to determine whether the patient is diabetic. The attributes in this case are the patient's characteristics, and the label is a categorization of 0 or 1, indicating whether the patient is non-diabetic or diabetic.
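A trivial stand-in model makes the idea concrete. The hand-written rule and coefficients below are invented for illustration; a real classifier trained in the designer would learn its decision boundary from labeled patient data.

```python
def predict_diabetic(age, bmi, blood_glucose):
    """Toy stand-in for a trained binary classifier.

    Maps patient features to a class label:
    0 = non-diabetic, 1 = diabetic. The weights are made up.
    """
    risk = 0.02 * age + 0.05 * bmi + 0.01 * blood_glucose
    return 1 if risk >= 3.5 else 0

print(predict_diabetic(age=45, bmi=32.0, blood_glucose=140))  # → 1
print(predict_diabetic(age=25, bmi=21.0, blood_glucose=85))   # → 0
```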

2.2.3. Clustering Model

Clustering is a type of unsupervised machine learning in which related items are grouped together based on their characteristics. A researcher might, for example, gather measurements of penguins and group them together based on proportional similarities.
You can use Microsoft Azure Machine Learning designer to create clustering models by using a drag and drop visual interface, without needing to write any code.
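What a clustering module does under the hood can be sketched in a few lines of plain Python. The minimal k-means below is only an illustration of grouping by proximity (the designer's own implementation is more sophisticated), and the penguin-like measurements are invented.

```python
import random

def k_means(points, k, iterations=20, seed=0):
    """Minimal k-means: group 2-D points around k centroids."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two obvious groups of penguin-like measurements (flipper length, weight).
data = [(180, 3.7), (182, 3.9), (185, 3.8), (220, 5.2), (222, 5.4), (218, 5.1)]
groups = k_means(data, k=2)
```

Note that no labels were provided anywhere, which is what makes clustering unsupervised.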


3. Computer Vision

Computer vision is a subfield of AI in which software systems are designed to perceive the world visually via cameras, images, and video. AI engineers and data scientists can solve a wide range of computer vision problems with a combination of custom machine learning models and platform-as-a-service (PaaS) solutions, including many cognitive services in Microsoft Azure. This section will go over some of what you can do with it.

You must build a resource for the Computer Vision service in your Azure subscription in order to use it. You have the option of using one of the following resource types:

Computer Vision: A dedicated resource for the Computer Vision service. If you don't plan to use any other cognitive services, or if you wish to track consumption and expenditure for your Computer Vision resource independently, choose this resource type.
Cognitive Services: A repository for generic cognitive services that covers Computer Vision as well as a variety of other cognitive services including Text Analytics, Translator Text, and others. If you want to use many cognitive services and want to make administration and development easier, this resource category is for you.

Regardless of the type of resource you create, it will provide two pieces of information that you will need in order to use it:

  • A key: a secret key that is used to verify the authenticity of client apps.
  • An endpoint: the HTTP address at which your resource can be accessed.
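Cognitive Services REST calls combine these two pieces of information: the request targets a path under the endpoint and authenticates with the `Ocp-Apim-Subscription-Key` header. The sketch below shows the shape of such a call; the resource name, key, and image URL are placeholders you would copy from your own Azure portal.

```python
import json
import urllib.request

# Placeholders - copy the real values from your resource in the Azure portal.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-secret-key>"

# Example: the Computer Vision v3.2 image-analysis operation.
request = urllib.request.Request(
    endpoint + "/vision/v3.2/analyze?visualFeatures=Description,Tags",
    data=json.dumps({"url": "https://example.com/photo.jpg"}).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": key,  # authenticates the client app
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(request)  # uncomment with a real resource
# analysis = json.loads(response.read())
```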

3.1. Analyze Images

The Computer Vision service can identify many kinds of features in an image and evaluate them:

Describing an image: Computer Vision can examine a picture, evaluate the objects discovered, and output a human-readable phrase or sentence that describes what was found in the image.

Tagging visual features: Computer Vision's picture descriptions are based on a database of thousands of identifiable items that can be used to suggest tags for the image.

Detecting objects: The service feature is similar to tagging in that it can identify common things; however, instead of just tagging or providing tags for the detected objects, this service can also supply bounding box coordinates.
Bonus: A bounding box (usually shortened to bbox) is the rectangular region that encloses a detected object. In image analysis it is typically expressed in pixel coordinates: the left and top of the rectangle plus its width and height.

Detecting brands: This function allows you to recognize commercial brands. Thousands of widely recognizable logos from commercial brands of products are already in the service's database.

Detecting faces: The Computer Vision service can detect and analyze human faces in images, including determining age and generating a bounding box rectangle for the location of each face.

Categorizing an image: Computer Vision can classify images based on what they contain, using a parent/child hierarchy drawn from a currently limited set of categories.

Detecting domain-specific content:

  • Celebrities: The service includes a model that has been trained to identify thousands of well-known celebrities from the worlds of sports, entertainment, and business.
  • Landmarks: The service can identify famous landmarks, such as the Taj Mahal and the Statue of Liberty.

Optical character recognition: The Computer Vision service can detect printed and handwritten text in photos using optical character recognition (OCR) capabilities.
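The bounding boxes mentioned for object and face detection above can be worked with directly. Below is a small helper, assuming pixel-coordinate boxes (left, top, width, height), that computes how much two boxes overlap using intersection over union, a common way to compare detections:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    """Axis-aligned box in pixel coordinates, as image services return it."""
    left: int
    top: int
    width: int
    height: int

    @property
    def area(self):
        return self.width * self.height

def intersection_over_union(a, b):
    """Overlap ratio between two boxes: 0.0 (disjoint) to 1.0 (identical)."""
    x1 = max(a.left, b.left)
    y1 = max(a.top, b.top)
    x2 = min(a.left + a.width, b.left + b.width)
    y2 = min(a.top + a.height, b.top + b.height)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    return inter / (a.area + b.area - inter) if inter else 0.0

box_a = BoundingBox(left=10, top=10, width=100, height=100)
box_b = BoundingBox(left=60, top=10, width=100, height=100)
print(round(intersection_over_union(box_a, box_b), 3))  # → 0.333
```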

3.2. Classify Images

3.2.1. Classification

Classification is a supervised form of learning, where you teach the computer to do something with data that's already been labeled by humans. The training set includes a fixed set of labels or categories for the computer to learn from.

3.2.2. Image classification

Image classification is the process of categorizing and labeling groups of pixels or vectors within an image based on specific rules. The categorization rule can be devised using one or more spectral or textural characteristics. The two general methods of classification are 'supervised' and 'unsupervised'.

3.2.3. Custom Vision Service

What is a neural network? A convolutional neural network (CNN) is a specific type of artificial neural network that uses perceptrons, a machine learning unit algorithm, for supervised learning to analyze data. CNNs apply to image processing, natural language processing, and other cognitive tasks.

The majority of recent image classification solutions rely on deep learning approaches that use convolutional neural networks (CNNs) to find patterns in pixels that correlate to different classes. Training an effective CNN is a difficult process that necessitates a high level of data science and machine learning knowledge.

The Custom Vision cognitive service in Microsoft Azure has integrated common methodologies for training image classification models, making it simple to train a model and publish it as a software service with little knowledge of deep learning techniques. You can train image classification models with the Custom Vision cognitive service and then deploy them as services for apps to use.

3.3. Read Text

The Computer Vision service, which also provides image analysis capabilities, handles extracting text from images.

A lot of the time, an image has words in it. It might be typed or handwritten text. Images of road signs, scanned documents in image formats such as JPEG or PNG file formats, or even just a photograph of a white board used during a conference are all popular examples.

The OCR API and the Read API are two application programming interfaces (APIs) provided by the Computer Vision service for reading text in images.

3.3.0.1. OCR API

An Optical Character Recognition(OCR) API helps you transcribe text from image files and PDF documents and receive the extracted data in a JSON/CSV/Excel or other file formats.

3.3.0.2. The Read API

The Read API uses an updated recognition model, takes an image or PDF document as the input and extracts text asynchronously.

3.4. Detect objects in images

Using the Custom Vision service

3.4.1. Object Detection

Object detection is a computer vision technique that allows us to identify and locate objects in an image or video. Its main purpose is to identify and locate one or more targets of interest in still image or video data.

3.4.1.1. Returned Information:

  • Each object in the image is assigned a class.
  • The object classification's likelihood score.
  • For each object, the coordinates of a bounding box.
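The three pieces of returned information above can be handled together. The sketch below filters raw detection results by probability; the result shape is modeled on what object-detection services typically return, but the exact field names vary between services, so treat them as illustrative.

```python
# Result shape modeled on typical object-detection output; field names vary.
detections = [
    {"tag": "apple",  "probability": 0.92,
     "bbox": {"left": 0.11, "top": 0.40, "width": 0.25, "height": 0.30}},
    {"tag": "banana", "probability": 0.55,
     "bbox": {"left": 0.60, "top": 0.42, "width": 0.20, "height": 0.28}},
    {"tag": "orange", "probability": 0.18,
     "bbox": {"left": 0.05, "top": 0.05, "width": 0.10, "height": 0.10}},
]

def confident_objects(results, threshold=0.5):
    """Keep only detections whose class probability clears the threshold."""
    return [(d["tag"], d["probability"], d["bbox"]) for d in results
            if d["probability"] >= threshold]

for tag, prob, bbox in confident_objects(detections):
    print(f"{tag}: {prob:.0%} at left={bbox['left']}, top={bbox['top']}")
```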

3.5. Detect And Analyze Faces

Face detection and analysis is a branch of artificial intelligence (AI) in which algorithms are used to find and evaluate human faces in photos and videos.

Using the Face service

3.5.1. Face detection

Face detection is a computer technology being used in a variety of applications that identifies human faces in digital images.

3.5.2. Facial analysis

Facial analysis is an intersection of the computer vision, machine learning, and image processing fields that aims to extract from a human face any kind of information except identity, such as emotion, engagement, vital measurements, and demographic information like age and gender, to name a few.

3.5.3. Facial recognition

Simply put, facial recognition means training a machine learning model to recognize known persons based on their facial traits.
It entails training a model with many photos of each person you want to recognize so that it can recognize those individuals in new images that it hasn't been trained on.

3.6. Analyze Receipts

3.6.1. Using the Form Recognizer service

Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine-learning models to extract key-value pairs, text, and tables from your documents.

You may use Azure's Form Recognizer to automate the processing of data in documents like forms, invoices, and receipts. It blends cutting-edge optical character recognition (OCR) with prediction algorithms that can decipher form data in the following ways:

  • Matching field names to values.
  • Processing tables of data.
  • Identifying specific field types, such as dates, phone numbers, addresses, totals, and so on.

You must create either a Form Recognizer resource or a Cognitive Services resource in your Azure subscription to use Form Recognizer. The Form Recognizer service is accessible through both resource types.

After the resource has been created, you can build client applications that use the resource's key and endpoint to submit forms for analysis.

3.6.1.1. Pre-built receipt model

The pre-built receipt model is trained to extract the following fields from receipts:

  • Time of transaction
  • Date of transaction
  • Merchant information
  • Taxes paid
  • Receipt totals
  • Other pertinent information that may be present on the receipt
  • All text on the receipt is recognized and returned as well

4. Natural Language Processing

Natural language processing enables applications to see, hear, communicate with, and comprehend users. Microsoft Azure makes it simple to create apps that support natural language by providing text analytics, translation, and language understanding services.

4.1. Text Analytics

Text analytics is a process in which a computer-based artificial intelligence (AI) program assesses qualities of text to produce specialized insights. People rely on their own experience and knowledge to gain insights; for a computer to do the same work, it must be given comparable knowledge.

4.1.1. Azure Language Service

The Language service, which is part of Azure Cognitive Services, can do advanced natural language processing on raw text.

To use the Language service in an application, you must provision an appropriate resource in your Azure subscription. You can choose to provision one of the following resource types:

  • A Language resource - pick this resource type if you only want to employ natural language processing services, or if you want to manage access and billing for the resource independently from other services.
  • A Cognitive Services resource - pick this resource type if you plan to use the Language service in combination with other cognitive services, and you want to manage access and billing for these services together.

4.1.1.1. Language detection

Language detection identifies the language of a text, and the parts of the text in which the language changes, all the way down to the word level. It is typically used to identify the language of business texts such as emails and chats, which often arrive in many languages.

4.1.1.2. Ambiguous or mixed language content

There may be text that is ambiguous or contains mixed-language content. Such conditions can be difficult for the service. A document with little or no text, or solely punctuation, would be an example of ambiguous content. For instance, analyzing the text ":-)" yields a value of unknown for the language name and language identifier, as well as a score of NaN.

4.1.1.3. Sentiment analysis

Sentiment analysis, also referred to as opinion mining, is an approach to natural language processing (NLP) that identifies the emotional tone behind a body of text. This is a popular way for organizations to determine and categorize opinions about a product, service, or idea.

4.1.1.4. Indeterminate sentiment

A score of 0.5 indicates that the text's sentiment is indeterminate, which could mean that the text lacks adequate context to discern a feeling or has insufficient wording. A list of words in a sentence with no structure, for example, could result in an indeterminate score. Another situation in which a score of 0.5 can appear is when an incorrect language code was used. A language code (such as "en" for English or "fr" for French) tells the service which language the text is in. If you provide text in French but tell the service that the language code is en, the service will return a score of exactly 0.5.
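The scoring convention described above can be captured in a small helper. This is a conceptual sketch of how a client might interpret a 0.0-1.0 sentiment score, including the indeterminate cases; it is not part of the Language service API.

```python
import math

def interpret_sentiment(score):
    """Map a 0.0-1.0 sentiment score to a label.

    Close to 0 reads as negative, close to 1 as positive; exactly 0.5
    (or NaN, as for unanalyzable text) is treated as indeterminate.
    """
    if math.isnan(score) or score == 0.5:
        return "indeterminate"
    return "positive" if score > 0.5 else "negative"

print(interpret_sentiment(0.97))  # → positive
print(interpret_sentiment(0.12))  # → negative
print(interpret_sentiment(0.5))   # → indeterminate
```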

4.1.1.5. Key phrase extraction

Key phrase extraction is one of the features offered by Azure Cognitive Service for Language, a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use key phrase extraction to quickly identify the main concepts in text. For example, in the text "The food was delicious and the staff were wonderful.", key phrase extraction will return the main topics: "food" and "wonderful staff".
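The shape of the task can be mimicked with a naive sketch: keep frequent non-stop-words as candidate phrases. The real Language service uses trained models and returns multi-word phrases; this toy version, with its tiny hand-written stop-word list, only illustrates the input and output of the task.

```python
import re
from collections import Counter

# Tiny hand-written stop-word list, for illustration only.
STOP_WORDS = {"the", "was", "and", "were", "a", "an", "of", "to", "in", "is"}

def key_phrases(text, top=3):
    """Naive key-phrase sketch: the most frequent non-stop-words."""
    words = re.findall(r"[a-z]+", text.lower())
    content = [w for w in words if w not in STOP_WORDS]
    return [w for w, _ in Counter(content).most_common(top)]

print(key_phrases("The food was delicious and the staff were wonderful."))
```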

4.1.1.6. Entity recognition

You can send unstructured text to the Language service, and it will return a list of entities that it recognizes. The service might also provide web links to additional information about the entity. An entity is essentially an item of a specific type or category and, in some situations, a subtype, as illustrated in the table below.

5. Recognize and synthesize speech

Speech recognition - the ability to detect and interpret spoken input.
Speech synthesis - the ability to generate spoken output.

5.0.1. Speech recognition

Speech recognition is the process of translating spoken words into data that can be processed, most commonly by transcribing them into a text representation. The spoken words can come from a recorded voice in an audio file or from live audio captured by a microphone. Speech patterns in the audio are examined to find recognizable patterns that can be mapped to words. To achieve this, the software often uses a variety of models, including:

  • An acoustic model for converting audio into phonemes (representations of specific sounds).
  • A language model that maps phonemes to words, usually with the use of a statistical algorithm that predicts the most likely word sequence based on the phonemes.
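The split between the two models can be sketched with a toy decoder: the acoustic model would produce phoneme groups, and a language model then scores candidate words for each group. Every entry below (the lexicon, the phonemes, the word probabilities) is invented purely to illustrate the idea.

```python
# Toy lexicon mapping phoneme sequences to candidate words (invented).
LEXICON = {
    ("HH", "EH", "L", "OW"): ["hello", "hollow"],
    ("W", "ER", "L", "D"): ["world", "whirled"],
}
# Toy "language model": prior likelihood of each word (invented).
WORD_PROBABILITY = {"hello": 0.8, "hollow": 0.2, "world": 0.9, "whirled": 0.1}

def decode(phoneme_groups):
    """Pick the most likely word for each group of phonemes.

    Sketch of the acoustic-model/language-model split: the acoustic
    model supplies the phoneme groups, the language model scores
    the candidate words they could represent.
    """
    return [max(LEXICON[g], key=WORD_PROBABILITY.get) for g in phoneme_groups]

print(decode([("HH", "EH", "L", "OW"), ("W", "ER", "L", "D")]))  # → ['hello', 'world']
```

A real language model scores whole word sequences in context rather than individual words, which is how it distinguishes, say, "world" from "whirled".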

The identified words are usually converted to text, which can be used for a variety of purposes, including:

  • Providing closed captions for content that has been recorded or is now being streamed
  • Making a transcript of a phone conversation or a meeting
  • Automating the dictation of notes
  • Identifying the user's intended input for subsequent processing

5.0.2. Speech synthesis

In many ways, speech synthesis is the polar opposite of speech recognition. It is concerned with the vocalization of data, which is commonly accomplished by transforming text to speech. The following information is often required by a speech synthesis solution:

  • The text that will be read aloud.
  • The voice that will be utilized to enunciate the words.

To synthesize speech, the system tokenizes the text to break it into individual words and assigns phonetic sounds to each word. It then breaks the phonetic transcription into prosodic units (phrases, clauses, or sentences) to create phonemes, which are converted to audio format. A voice is applied to synthesize these phonemes into audio, determining parameters such as pitch and timbre and generating an audio waveform that can be output to a speaker or written to a file.

The output of speech synthesis can be used for a variety of things, including:

  • Creating spoken responses to user input.
  • Creating voice menus for telephone systems.
  • Reading email or text messages aloud in hands-free situations.
  • Broadcasting public announcements at places like train stations and airports.

6. Translation

6.0.0.1. What is a Literal Translation?

Literal translation is a term used to describe a word-for-word translation that sticks too close to the source text. It is a translation that sounds unnatural in the target language, is usually hard to read and does not convey the meaning of the original text.

6.0.0.2. What is a Semantic Translation?

Semantic Information: The concept of semantic information refers to information which is in some sense meaningful for a system, rather than merely correlational.

Semantic translation is the process of using semantic information to aid in the translation of data in one representation or data model to another representation or data model

6.0.1. Text and speech translation

Text translation can be used to translate documents from one language to another, to translate emails from foreign governments, and even to translate web pages on the Internet. You'll often see a Translate option for social media posts, or in search engines such as Bing and Google.

Translation is supported by Microsoft Azure's cognitive services. In particular, you can use the following services:

6.1. The Translator service

The Translator service supports text-to-text translation. It removes the complexity of building instant translation into your apps and solutions with a single REST API call: accurately detect the language of your source text, look up alternative translations with the bilingual dictionary, or convert text from one script to another.
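That single REST call looks roughly like the sketch below, built with the standard library against the Translator v3 `translate` operation. The key and region values are placeholders; check the current Translator REST reference for the exact parameters your scenario needs.

```python
import json
import urllib.request

# Translator v3 REST endpoint; key and region below are placeholders.
endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = "?api-version=3.0&from=en&to=fr"
body = [{"Text": "Hello, how are you?"}]

request = urllib.request.Request(
    endpoint + params,
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": "<your-translator-key>",
        "Ocp-Apim-Subscription-Region": "<your-resource-region>",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(request)  # uncomment with a real key
# print(json.loads(response.read())[0]["translations"][0]["text"])
```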

6.2. The Speech service

The Speech service enables speech-to-text and speech-to-speech translation.

The Speech service includes the following application programming interfaces (APIs):

  • Speech-to-text - used to transcribe speech from an audio source to text format.
  • Text-to-speech - used to generate spoken audio from a text source.
  • Speech Translation - used to translate speech in one language to text or speech in another.

7. Conversational Language Understanding

As artificial intelligence (AI) becomes more advanced, conversational interaction with apps and digital assistants is becoming more prevalent, and in certain cases can result in human-like conversations with AI agents. Customer service applications, reservation systems, and home automation are common scenarios for this type of technology.
Computers must not only be able to accept language as input (either in text or audio format) but also comprehend the semantic meaning of the input - in other words, grasp what is being said - in order to achieve the goal of the imitation game.

Conversational language understanding is offered by Microsoft Azure's Language Service. You must consider three essential ideas while working with Conversational Language Understanding.

7.0.0.1. Utterances

In spoken language analysis, an utterance is the smallest unit of speech: a continuous piece of speech beginning and ending with a clear pause. In a language-understanding application, an utterance is an example of something a user might say, such as:

"Switch the fan on"
"Turn on the light"

7.0.0.2. Entities

An entity is an item to which an utterance refers. For example, fan and light in the following utterances:

"Switch the fan on"
"Turn on the light"

7.0.0.3. Intents

The purpose, or goal, expressed in a user's utterance is represented by an intent. For example, the aim of both of the previously examined utterances is to switch on a device; hence, you might declare a TurnOn intent for these utterances in your Conversational Language Understanding application.

After you've defined the entities and intents with sample utterances in your conversational language understanding application, you can train a language model to predict intents and entities from user input - even if it doesn't match the sample utterances exactly. You can then use the model from a client application to retrieve predictions and respond appropriately.
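The utterance/entity/intent split can be illustrated with a toy rule-based predictor. This is only a conceptual stand-in for the trained model described above: the rule tables and function names below are invented for illustration, and the real Language service learns these mappings from your sample utterances rather than from hand-written rules.

```python
# Toy illustration of intent and entity prediction (not the Language service).
INTENT_RULES = {
    "TurnOn": ["turn on", "switch on"],
    "TurnOff": ["turn off", "switch off"],
}
KNOWN_ENTITIES = ["fan", "light"]

def predict(utterance: str):
    """Return (intent, entities) for an utterance using simple word matching."""
    words = set(utterance.lower().replace("?", "").split())
    intent = "None"
    for name, phrases in INTENT_RULES.items():
        # An intent matches when all words of one trigger phrase are present.
        if any(set(phrase.split()) <= words for phrase in phrases):
            intent = name
            break
    entities = [e for e in KNOWN_ENTITIES if e in words]
    return intent, entities
```

For example, "Switch the fan on" maps to the TurnOn intent with the entity fan, mirroring the utterances above.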

8. Build A Bot

Bots are a popular technique to deliver assistance across numerous channels. This lesson explains how to develop a bot that answers user inquiries using a knowledge base and Azure Bot Service.

8.0.1. Conversational AI

Conversational AI combines natural language processing (NLP) with traditional software like chatbots, voice assistants, or an interactive voice recognition system to help customers through either a spoken or typed interface. Conversations typically take the form of messages exchanged in turns; and one of the most common kinds of conversational exchange is a question followed by an answer. This pattern forms the basis for many user support bots, and can often be based on existing FAQ documentation.

Many organizations make assistance information and answers to frequently asked questions (FAQs) available via a web browser or dedicated app. However, because of the complexity of the systems and services they provide, finding answers to specific inquiries can be difficult, and support staff are frequently swamped with requests via phone calls, email, text messaging, social media, and other channels. Building a bot is therefore a great idea.

8.0.2. A bot using Microsoft Azure

On Microsoft Azure, you can quickly build a user assistance bot using a combination of two main services:

8.0.2.1. Language service

The custom question answering feature of the Language service allows you to build a knowledge base of question-and-answer pairs that can be queried using natural language input.
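The knowledge-base idea can be sketched as a small dictionary of question/answer pairs queried with fuzzy matching. This is only an illustration of the concept: the real custom question answering feature trains a model over your pairs, while `difflib` here just approximates "close enough" matching.

```python
import difflib

# Toy knowledge base of question/answer pairs (contents are invented examples).
KNOWLEDGE_BASE = {
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
}

def answer(question: str, cutoff: float = 0.5) -> str:
    """Return the answer whose stored question best matches the input."""
    matches = difflib.get_close_matches(
        question.lower(), list(KNOWLEDGE_BASE), n=1, cutoff=cutoff
    )
    return KNOWLEDGE_BASE[matches[0]] if matches else "Sorry, I don't know."
```

A user phrasing such as "How can I reset my password?" still retrieves the password-reset answer even though it doesn't match the stored question exactly.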

8.0.2.2. The Azure Bot Service

Azure Bot Service provides an Azure-based platform for creating, publishing, and maintaining bots.


9. Decision-Making Assistance: Anomaly Detection

Brief: Anomaly detection is the identification of rare events, items, or observations that are suspicious because they differ significantly from standard behaviors or patterns. In AI, anomaly detection is a technique used to determine whether values in a series are within expected parameters.
Anomalies are values that fall outside the expected value or range of values.

9.0.1. Anomaly Detector Service

Azure's Anomaly Detector service is part of the Decision category of Azure Cognitive Services. It is a cloud-based tool for monitoring and discovering anomalies in time series data, and it requires no prior knowledge of machine learning. You can easily integrate Anomaly Detector into your projects through its REST API. The service uses a "one parameter" approach: the key parameter to adjust is Sensitivity, which ranges from 1 to 99 and tailors the results to your scenario. Anomalies can be detected in both historical data and real-time data, such as streaming input from IoT devices, sensors, and other sources.

9.0.1.1. Functionality

The Anomaly Detector service detects anomalies that fall outside a boundary, and a sensitivity value is used to set that boundary. By default, the upper and lower boundaries for anomaly detection are calculated from three values: expectedValue, upperMargin, and lowerMargin. A value that exceeds either boundary is flagged as an anomaly. The margins can be adjusted by applying a marginScale, as shown in the following formulas.

upperBoundary = expectedValue + (100 - marginScale) * upperMargin
lowerBoundary = expectedValue - (100 - marginScale) * lowerMargin
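The boundary formula can be applied directly. The sketch below computes both boundaries and flags a value that falls outside them, under the assumption (consistent with the text above) that marginScale plays the role of the 1-99 Sensitivity parameter; the function names are illustrative, not service API names.

```python
def boundaries(expected_value, upper_margin, lower_margin, margin_scale):
    """Compute detection boundaries from the margin formula above.

    margin_scale acts like the Sensitivity parameter (1-99): a higher
    value shrinks the margins, so more points are flagged as anomalies.
    """
    upper = expected_value + (100 - margin_scale) * upper_margin
    lower = expected_value - (100 - margin_scale) * lower_margin
    return upper, lower

def is_anomaly(value, expected_value, upper_margin, lower_margin, margin_scale):
    """A value is anomalous if it exceeds either boundary."""
    upper, lower = boundaries(expected_value, upper_margin, lower_margin, margin_scale)
    return value > upper or value < lower

# With sensitivity 99 the band around an expected value of 10.0 is narrow:
print(boundaries(10.0, 0.5, 0.5, 99))        # (10.5, 9.5)
print(is_anomaly(12.0, 10.0, 0.5, 0.5, 99))  # True
```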

9.0.1.2. Data format

The Anomaly Detector service accepts data in JSON format. You can use any numerical data that you have collected over time. The key parts of the payload are the granularity, and a timestamp and a recorded value for each reading. The following code sample shows a JSON object that you could send to the API. The granularity is set to hourly, and the series represents temperatures in degrees Celsius recorded at the specified timestamps.

{
  "granularity": "hourly",
  "series": [
    {
      "timestamp": "2021-03-02T01:00:00Z",
      "value": -10.56
    },
    {
      "timestamp": "2021-03-02T02:00:00Z",
      "value": -8.30
    },
    {
      "timestamp": "2021-03-02T03:00:00Z",
      "value": -10.30
    },
    {
      "timestamp": "2021-03-02T04:00:00Z",
      "value": 5.95
    }
  ]
}
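A payload of this shape is easy to build programmatically from raw readings. The sketch below is a minimal helper (the function name is invented for illustration); the output serializes to valid JSON ready to send to the service.

```python
import json

def build_payload(readings, granularity="hourly"):
    """Build an Anomaly Detector request body.

    readings: list of (iso_8601_timestamp, numeric_value) tuples.
    """
    return {
        "granularity": granularity,
        "series": [{"timestamp": ts, "value": v} for ts, v in readings],
    }

payload = build_payload([
    ("2021-03-02T01:00:00Z", -10.56),
    ("2021-03-02T02:00:00Z", -8.30),
])
body = json.dumps(payload)  # valid JSON, ready to POST to the service
```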

9.0.2. Use-Case

9.0.2.1. Batch detection

In batch detection, the algorithm is applied to an entire data series at once: the service uses your complete time series to look for irregularities, creating a single model from all of the data and analyzing each point with that same model.

Batch detection is the best option when your data contains the following:

  • Time series data with a flat trend and occasional spikes or dips
  • Seasonal time series data with a few outliers

Seasonality is a pattern in your data that repeats itself at regular intervals; hourly, daily, and monthly patterns are examples. Detection delay can be reduced by using seasonal data and specifying a period for that pattern.

Anomaly Detector develops a single statistical model based on the whole data set that you send to the service when you use the batch detection mode. Each data point in the data set is assessed using this model, and abnormalities are discovered.

9.0.2.2. Real-time detection

Brief: Real-time anomaly detection is the task of detecting anomalies as data arrives, with fast inference while maintaining a base level of accuracy.

Real-time detection compares previously observed data points to the most recent one to determine whether the latest point is an anomaly. This operation creates a model from the data points you provide and decides whether the target (latest) point is anomalous. You can monitor your data as it is generated by calling the service with each new data point.
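The streaming idea can be sketched with a simple sliding-window detector. This is a conceptual illustration only, not the service's actual algorithm: it flags the newest point when it lies more than `k` standard deviations from the mean of recent history, and the class and parameter names are invented.

```python
from collections import deque
import statistics

class StreamingDetector:
    """Toy real-time detector: compare each new point to recent history."""

    def __init__(self, window=12, k=3.0):
        self.history = deque(maxlen=window)  # sliding window of recent points
        self.k = k  # how many standard deviations count as anomalous

    def check(self, value):
        """Return True if `value` looks anomalous versus recent history."""
        if len(self.history) >= 3:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            anomaly = stdev > 0 and abs(value - mean) > self.k * stdev
        else:
            anomaly = False  # not enough history to judge yet
        self.history.append(value)
        return anomaly

d = StreamingDetector()
flags = [d.check(v) for v in [10, 10, 11, 10, 50]]  # the final spike is flagged
```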


10. Knowledge Mining

Brief: Knowledge mining is an emerging discipline in artificial intelligence (AI) that uses a combination of modular services to quickly learn from vast amounts of information.

It's never been easier to find information on the internet. Finding information in documents that aren't in a search index, on the other hand, remains a challenge. People deal with unstructured, typed, image-based, or handwritten documents on a daily basis, and frequently have to pore through them by hand to extract and record the information they contain. We now have technologies that can automate this extraction.

The term "knowledge mining" refers to methods that entail extracting information from enormous amounts of typically unstructured data. Azure Cognitive Search, a private, enterprise-level search solution with index-building capabilities, is one of these knowledge mining solutions.
The indexes can then be utilized for internal purposes solely, or they can be used to provide searchable material on publicly accessible internet assets.

10.1. Azure Cognitive Search

Azure Cognitive Search (formerly known as "Azure Search") is a cloud search service that gives developers infrastructure, APIs, and tools to extract data from a variety of structured, semi-structured, and unstructured documents, and to build a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.

10.1.1. Indexes

An Azure Cognitive Search index can be thought of as a container of searchable documents. An index resembles a table in which each row represents a document; a table's columns are the equivalent of a document's fields, and just as columns have data types, so do fields.

10.1.1.1. Index schema

An index is a persistent collection of JSON documents and other content used to allow search capability in Azure Cognitive Search. An index's documents are similar to rows in a table; each document is a single unit of searchable data in the index.

{
  "name": "index",
  "fields": [
    {
      "name": "content", "type": "Edm.String", "analyzer": "standard.lucene", "fields": []
    },
    {
      "name": "keyphrases", "type": "Collection(Edm.String)", "analyzer": "standard.lucene", "fields": []
    },
    {
      "name": "imageTags", "type": "Collection(Edm.String)", "analyzer": "standard.lucene", "fields": []
    }
  ]
}
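The documents-as-rows, fields-as-columns idea can be illustrated with a toy in-memory inverted index. Azure Cognitive Search builds far richer indexes (with analyzers, scoring, and enrichment), but the basic shape, mapping terms to the documents that contain them, is similar; the document contents below are invented examples.

```python
from collections import defaultdict

# Each document is a "row" with named fields, like the schema above.
documents = [
    {"id": "1", "content": "annual financial report", "keyphrases": ["finance"]},
    {"id": "2", "content": "marketing plan for the year", "keyphrases": ["marketing"]},
]

# Inverted index: term -> set of ids of documents containing it.
inverted = defaultdict(set)
for doc in documents:
    for term in doc["content"].split():
        inverted[term].add(doc["id"])

def search(term):
    """Return the ids of documents whose content field contains `term`."""
    return sorted(inverted.get(term, set()))

print(search("report"))  # ['1']
```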

10.1.1.2. Index Attributes

Indexing an attribute can improve the performance of queries on that attribute.
Azure Cognitive Search needs to know how you would like to search and display the fields in your documents. You specify that by assigning attributes, or behaviors, to these fields (for example, searchable, filterable, sortable, facetable, and retrievable).

10.1.2. Knowledge Store

A knowledge store is a place where enriched content can be stored indefinitely. A knowledge store's goal is to store the data generated by AI enrichment in a container. You might want to save the results of an AI skill set that generates captions from images, for example.


Understanding what artificial intelligence is won't take you far until you see how it comes into play. So, if that interests you, I strongly encourage you to learn more about it!

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” —Eliezer Yudkowsky