
Azure

1. Helpful Materials
1.1. Microsoft Learn
Microsoft has extensive documentation about Azure Pipelines, which can be found here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/?view=azure-devops
Reviewing the Pipeline basics section of the documentation is recommended.
1.2. Azure Pipelines Basics - YouTube Video
This video explains the fundamentals of Azure Pipelines.
https://www.youtube.com/watch?v=XTjV483nIuQ
2. Preliminary Configuration
2.1. Agents and Agent Pools
When your build or deployment runs, the system begins one or more jobs. An agent is computing infrastructure
with installed agent software that runs one job at a time. For example, your job could run on a Microsoft-hosted
Ubuntu agent.
Azure Pipelines uses agent pools to manage the agents that run your build and deployment jobs. An agent pool
is a grouping of one or more build and deployment agents, which are responsible for executing tasks on behalf of
your pipelines.
By default, Azure uses Microsoft-hosted agent pools, but they have some drawbacks: these free agents are relatively slow, and due to IP restrictions they have no access to the Goflint FTP and SQL servers by default. We have therefore configured a custom self-hosted agent pool, “GoFlintPool”. To add a new agent to this pool, go to https://dev.azure.com/goflint/Goflint/_settings/agentqueues, select GoFlintPool, switch to the Agents tab, click New Agent, and then follow the Azure instructions displayed. These instructions require installing and configuring the agent on a machine and granting it permissions to the pipelines.
The GoFlintPool agent pool is then selected in our pipeline settings.
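For reference, in a YAML pipeline the equivalent pool selection would be declared as follows (we use classic pipelines, where the pool is chosen in the UI instead):

```yaml
# Select the self-hosted pool instead of a Microsoft-hosted image
pool:
  name: GoFlintPool
```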
2.1.1. Adding Agent Pool
This section provides instructions for adding new agent pools. This is already configured: separate agent pools exist for the TEST and PROD environments. GoFlintPoolTestEnv contains an agent hosted on the TEST web server; similarly, a new agent on the PROD web server should be added under GoFlintPoolProdEnv.
- Open “Project Settings” -> “Pipelines” -> “Agent pools” -> “Add pool”
- Add new pool settings
2.1.2. Generating personal access tokens (PAT)
Agents run on the agent host's operating system and require permissions to the Azure DevOps organization. This is usually granted by providing a Personal Access Token.
- On the Personal Access Tokens page, click “New Token”
- Save the generated token somewhere safe; it cannot be displayed again later
2.1.3. Adding new agents to agent pool
These are the agent setup instructions provided by Azure DevOps, laid out step by step:
a) Download the Agent file (.zip) onto your agent machine. Ensure the file is saved to the following path:
"$HOME\Downloads\vsts-agent-win-x64-3.232.1.zip"
b) Run PowerShell as administrator at the C:\ path and run the following commands:
- cd C:\
- mkdir agent
- cd agent
- Add-Type -AssemblyName System.IO.Compression.FileSystem
- [System.IO.Compression.ZipFile]::ExtractToDirectory("$HOME\Downloads\vsts-agent-win-x64-3.232.1.zip", "$PWD")
c) Configure the agent by running the following command in C:\agent:
- .\config.cmd
The command will prompt for the following information:
- Enter server URL > https://dev.azure.com/goflint
- Enter authentication type (press enter for PAT) > press Enter
- Enter personal access token > {{paste your Personal Access Token (see step 2.1.2.)}}
- Enter agent pool (press enter for default) > {{enter agent pool name}}
- Enter agent name (press enter for TEST) > {{enter agent name, for example MachineProdEnv}}
Create a new folder `C:\agent-immo` and proceed to the next prompt once the folder exists:
- Enter work folder (press enter for _work) > C:\agent-immo
- Enter run agent as service? (Y/N) (press enter for N) > y
- Enter enable SERVICE_SID_TYPE_UNRESTRICTED for agent service (Y/N) (press enter
for N) > press Enter
- Enter whether to prevent service starting immediately after configuration is
finished? (Y/N) (press enter for N) > n
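The interactive prompts above can alternatively be supplied as command-line flags for an unattended setup; the pool name, agent name and token below are placeholders, not values from our configuration:

```powershell
# Unattended agent configuration; replace the angle-bracket values
.\config.cmd --unattended `
  --url https://dev.azure.com/goflint `
  --auth pat --token <your-PAT> `
  --pool <agent-pool-name> `
  --agent <agent-name> `
  --work C:\agent-immo `
  --runAsService
```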
If the agent is not running as a service, you can run it interactively by executing:
- .\run.cmd
2.1.4. Required Installations on the Agent Machine:
- Visual Studio Community 2022
- TortoiseSVN (it is important to select the SVN command line client tools during installation)
- SQL Server Data-Tier Application Framework (18.3.1)
https://www.microsoft.com/en-us/download/details.aspx?id=100297
2.2. Service connections
In Azure Pipelines, a service connection is a secure way to connect to external services or systems, allowing your
pipeline to interact with resources outside of Azure DevOps. In our case we have configured a service connection
to connect to HelixTeamHub subversion. Service connections are configured here:
https://dev.azure.com/goflint/Goflint/_settings/adminservices
The HelixTeamHubSubversion service connection is then referenced in our pipelines.
3. Pipelines Fundamentals
3.1. Triggers
In Azure Pipelines, triggers are mechanisms that define when a pipeline should run automatically. Triggers help automate the execution of your pipeline based on specific events, schedules, or external conditions. For example, a pipeline can be initiated manually or by an event such as a timer or a new commit in the source code repository (commit-driven builds are known as continuous integration, or CI).
For now, CI is disabled and our pipelines are executed manually.
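For reference, if we later move to YAML pipelines, a CI trigger would be declared like this (the branch name is an illustrative assumption):

```yaml
# Run the pipeline on every commit to the listed branches (CI)
trigger:
  branches:
    include:
      - trunk

# To keep CI disabled, as we do now, use instead:
# trigger: none
```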
More about triggers:
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops
3.2. Variables
In Azure Pipelines, variables are used to store values or expressions that can be referenced and used throughout
your pipeline. Variables provide a way to parameterize and customize your pipeline configuration, making it
more flexible and reusable. There are system and user variables, and it is possible to mark variables as secrets
which is useful for sensitive fields such as passwords.
Variables are then referenced in tasks with the following syntax: $(buildConfiguration). Variable names are case-insensitive.
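As a sketch, in YAML form a user variable and its reference would look like this (classic pipelines define the same variables in the Variables tab of the pipeline editor):

```yaml
variables:
  buildConfiguration: 'Release'   # user variable; mark as secret in the UI for passwords

steps:
  # $(buildConfiguration) is substituted before the script runs
  - script: echo Building with the $(buildConfiguration) configuration
```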
More about variables:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
3.3. Pipelines Configuration
Azure Pipelines supports YAML and classic pipelines. YAML pipelines are defined in YAML code that can be stored in source control, while classic pipelines provide a visual interface for managing tasks. We use classic pipelines as they are simpler to start with.
We use both Pipelines and Releases sections in Azure Pipelines.
The idea behind this is separation between CI (continuous integration) and CD (continuous deployment). CI
pipelines are triggered when code is committed and generate a package ready to be deployed. We call this
package an artifact and it contains zip archives for Front and Back projects, as well as DACPAC file for the
database.
CD pipelines (Releases section) normally start when a new build is ready (generated from CI pipeline). CD
pipelines contain deployment instructions.
For each environment we have configured separate pipelines: two CI pipelines for TEST and PROD and two CD
pipelines for TEST and PROD.
4. CI Configuration
4.1. NuGet Restore Back
Restores NuGet packages for Cadabra Back solution.
4.2. Publish Back
Uses the msbuild utility to build and publish the Back project with the configuration provided in the $(buildConfiguration) variable.
4.3. Zip Back
Archives all published Back files into a single zip file.
4.4. NuGet Restore Front
Restores NuGet packages for Cadabra Front solution.
4.5. Publish Front
Uses the msbuild utility to build and publish the Front project with the configuration provided in the $(buildConfiguration) variable.
4.6. Zip Front
Archives all published Front files into a single zip file.
4.7. Publish Artifacts
Publishes an artifact that contains Front.zip, Back.zip and Database.dacpac to the Azure Artifacts repository. This action DOES NOT deploy the app; it only stores the package for later use in release pipelines.
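Although these steps are configured as classic pipeline tasks, the Back half of the CI pipeline can be sketched in YAML as follows; the task names are standard Azure Pipelines tasks, but the solution paths and folder names are illustrative assumptions, not our actual configuration:

```yaml
pool:
  name: GoFlintPool              # self-hosted pool from section 2.1

variables:
  buildConfiguration: 'Release'

steps:
  # 4.1 / 4.2: restore NuGet packages, then build and publish the Back solution
  - task: NuGetCommand@2
    inputs:
      command: 'restore'
      restoreSolution: 'Back/*.sln'
  - task: VSBuild@1
    inputs:
      solution: 'Back/*.sln'
      configuration: '$(buildConfiguration)'
  # 4.3: archive the published output into a single zip
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: '$(Build.BinariesDirectory)/Back'
      archiveFile: '$(Build.ArtifactStagingDirectory)/Back.zip'
  # 4.7: publish the artifact for later use by the release (CD) pipeline
  - task: PublishBuildArtifacts@1
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```

The Front steps (4.4 to 4.6) follow the same pattern with the Front solution.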
5. CD Configuration
5.1. Trigger
The release pipeline is configured to trigger when a new build is ready.
5.2. Deploy Database
Uses the SQL Server database deploy task to deploy database changes in SQL DACPAC mode. Database connection parameters are configured in the task. This action compares the schema in the DACPAC file from the build with the actual schema of the database and creates or updates entities such as tables, views and stored procedures. By default, it does not delete any objects from the database.
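The same DACPAC deployment can be performed manually with the SqlPackage utility (part of the Data-Tier Application Framework from section 2.1.4); the server and database names below are placeholders:

```powershell
# Manual schema deployment from a DACPAC; compares and updates, does not drop objects
SqlPackage.exe /Action:Publish `
  /SourceFile:"Database.dacpac" `
  /TargetServerName:"<sql-server-name>" `
  /TargetDatabaseName:"<database-name>" `
  /p:BlockOnPossibleDataLoss=true
```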
5.3. Extract Back Files
Extracts files from Back zip archive.
5.4. Extract Front Files
Identical to Extract Back Files.
5.5. Deploy Back
Copies files from the temp folder to the actual IIS web app location using the Overwrite flag.
5.6. Deploy Front
Identical to Deploy Back.
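The extract and copy steps above can be sketched as the following PowerShell commands; the archive, temp and IIS paths are illustrative assumptions, not our actual configuration:

```powershell
# Extract the build archives from the artifact into a temp folder
Expand-Archive -Path "C:\agent-immo\drop\Back.zip"  -DestinationPath "C:\agent-immo\temp\Back"  -Force
Expand-Archive -Path "C:\agent-immo\drop\Front.zip" -DestinationPath "C:\agent-immo\temp\Front" -Force

# Copy the extracted files over the IIS web app locations, overwriting existing files
Copy-Item -Path "C:\agent-immo\temp\Back\*"  -Destination "C:\inetpub\wwwroot\Back"  -Recurse -Force
Copy-Item -Path "C:\agent-immo\temp\Front\*" -Destination "C:\inetpub\wwwroot\Front" -Recurse -Force
```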