# Deploy UserManagement Service with MySQL Database

## Step-01: Introduction
- We are going to deploy a User Management Microservice which will connect to the MySQL database schema `usermgmt` during startup.
- Then we can test the following APIs:
  - Create Users
  - List Users
  - Delete User
  - Health Status
| Kubernetes Object | YAML File |
| ----------------- | --------- |
| Deployment, Environment Variables | 06-UserManagementMicroservice-Deployment.yml |
| NodePort Service | 07-UserManagement-Service.yml |
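For orientation, the full set of manifests applied in this section is sketched below. The first five are carried forward from the earlier MySQL/EBS sections, and the folder layout is assumed to follow the numbering used throughout this page.

```sh
# kube-manifests/ folder for this section (filenames as referenced on this page)
kube-manifests/
├── 01-storage-class.yml
├── 02-persistent-volume-claim.yml
├── 03-UserManagement-ConfigMap.yml
├── 04-mysql-deployment.yml
├── 05-mysql-clusterip-service.yml
├── 06-UserManagementMicroservice-Deployment-Service.yml
└── 07-UserManagement-Service.yml
```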
## Kubernetes Manifests
```yaml
# 01-storage-class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```
```yaml
# 02-persistent-volume-claim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
```
```yaml
# 03-UserManagement-ConfigMap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: usermanagement-dbcreation-script
data:
  mysql_usermgmt.sql: |-
    DROP DATABASE IF EXISTS usermgmt;
    CREATE DATABASE usermgmt;
```
```yaml
# 04-mysql-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: dbpassword11
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
            - name: usermanagement-dbcreation-script
              mountPath: /docker-entrypoint-initdb.d # Refer "Initializing a fresh instance" at https://hub.docker.com/_/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: ebs-mysql-pv-claim
        - name: usermanagement-dbcreation-script
          configMap:
            name: usermanagement-dbcreation-script
```
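Once the MySQL Deployment is running, you can optionally confirm that the ConfigMap script landed in the directory the MySQL image scans at first startup, and that the EBS-backed volume is mounted. This is a minimal sketch assuming the `mysql:5.6` image's standard shell utilities; `<mysql-pod-name>` is a placeholder for the Pod name from the first command.

```sh
# List the MySQL pod (label app=mysql as set in the Deployment)
kubectl get pods -l app=mysql

# Confirm the DB-creation script from the ConfigMap is mounted into the init directory
kubectl exec -it <mysql-pod-name> -- ls /docker-entrypoint-initdb.d
kubectl exec -it <mysql-pod-name> -- cat /docker-entrypoint-initdb.d/mysql_usermgmt.sql

# Confirm the EBS-backed volume is mounted at the MySQL data directory
kubectl exec -it <mysql-pod-name> -- df -h /var/lib/mysql
```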
```yaml
# 05-mysql-clusterip-service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
  clusterIP: None # Headless service: this means we are going to use the Pod IP directly
```
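Because `clusterIP: None` makes this a headless Service, the DNS name `mysql` resolves directly to the MySQL Pod IP instead of a virtual service IP. A quick way to see this from inside the cluster is the standard DNS-debugging pattern below; the pod name `dns-test` is arbitrary.

```sh
# Resolve the headless "mysql" Service from a throwaway busybox pod
kubectl run -it --rm --restart=Never --image=busybox:1.28 dns-test -- nslookup mysql

# Compare the returned address with the MySQL Pod IP
kubectl get pods -l app=mysql -o wide
```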
```yaml
# 06-UserManagementMicroservice-Deployment-Service.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: usermgmt-microservice
  labels:
    app: usermgmt-restapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: usermgmt-restapp
  template:
    metadata:
      labels:
        app: usermgmt-restapp
    spec:
      containers:
        - name: usermgmt-restapp
          image: stacksimplify/kube-usermanagement-microservice:1.0.0
          ports:
            - containerPort: 8095
          env:
            - name: DB_HOSTNAME
              value: "mysql"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "usermgmt"
            - name: DB_USERNAME
              value: "root"
            - name: DB_PASSWORD
              value: "dbpassword11"
```
```yaml
# 07-UserManagement-Service.yml
apiVersion: v1
kind: Service
metadata:
  name: usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
spec:
  type: NodePort
  selector:
    app: usermgmt-restapp
  ports:
    - port: 8095
      targetPort: 8095
      nodePort: 31231
```
## Step-02: Create the following Kubernetes manifests
### Create User Management Microservice Deployment manifest
- Environment Variables:

| Key Name | Value |
| -------- | ----- |
| DB_HOSTNAME | mysql |
| DB_PORT | 3306 |
| DB_NAME | usermgmt |
| DB_USERNAME | root |
| DB_PASSWORD | dbpassword11 |
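Once the Deployment from Step-03 is running, these values can be double-checked inside the container. A minimal sketch, assuming the container image ships a shell with `env` and `grep`; `<usermgmt-pod-name>` is a placeholder for the Pod name shown by `kubectl get pods`.

```sh
# Print the DB_* environment variables injected into the microservice container
kubectl exec -it <usermgmt-pod-name> -- env | grep DB_
```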
### Create User Management Microservice NodePort Service manifest

- NodePort Service exposing the application on nodePort 31231 (the port mapping can be read back with the sketch below)
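The Service maps Service `port` 8095 to the container `targetPort` 8095 and publishes it on every worker node at `nodePort` 31231. A small sketch for reading the assigned node port back from the cluster, using the Service name defined in 07-UserManagement-Service.yml:

```sh
# Show the NodePort Service
kubectl get svc usermgmt-restapp-service

# Extract just the assigned nodePort (expected: 31231)
kubectl get svc usermgmt-restapp-service -o jsonpath='{.spec.ports[0].nodePort}'
```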
## Step-03: Create UserManagement Service Deployment & Service
```sh
# Create Deployment & NodePort Service
kubectl apply -f kube-manifests/

# List Pods
kubectl get pods

# Verify logs of Usermgmt Microservice pod
kubectl logs -f <Pod-Name>

# Verify sc, pvc, pv
kubectl get sc,pvc,pv
```
- The User Management Microservice pod will restart multiple times because the database is not yet available when the application starts.
- To avoid such situations, we can apply the `initContainers` concept to our User Management Microservice Deployment manifest.
- We will see that in the next section; for now, let's continue testing the application. A quick way to observe these restarts is sketched below.
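The commands below are standard kubectl ways to watch the restarts and inspect why the previous container instance exited; `<usermgmt-pod-name>` is a placeholder for the actual Pod name.

```sh
# Watch pod status and the restart count as the database comes up
kubectl get pods -w

# Inspect events and the reason for the last restart
kubectl describe pod <usermgmt-pod-name>

# Logs from the previous (crashed) container instance
kubectl logs <usermgmt-pod-name> --previous
```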
- Access Application

```sh
# List Services
kubectl get svc

# Get Public IP
kubectl get nodes -o wide

# Access Health Status API for User Management Service
http://<EKS-WorkerNode-Public-IP>:31231/usermgmt/health-status
```
## Step-04: Test User Management Microservice using Postman

### Download Postman client
- https://www.postman.com/downloads/
### Import Project to Postman
- Import the Postman project `AWS-EKS-Masterclass-Microservices.postman_collection.json` present in the folder `04-03-UserManagement-MicroService-with-MySQLDB`.
### Create Environment in Postman
- Go to Settings -> Click on Add
- Environment Name: UMS-NodePort
- Variable: url
- Initial Value: http://WorkerNode-Public-IP:31231
- Current Value: http://WorkerNode-Public-IP:31231
- Click on Add
### Test User Management Services

- Select the environment before calling any API (curl equivalents are sketched after this list).
- Health Status API
  - URL: `{{url}}/usermgmt/health-status`
- Create User Service
  - URL: `{{url}}/usermgmt/user`
  - The `url` variable will be replaced with the value from the environment we selected.
  - Request Body:

```json
{
  "username": "admin1",
  "email": "dkalyanreddy@gmail.com",
  "role": "ROLE_ADMIN",
  "enabled": true,
  "firstname": "fname1",
  "lastname": "lname1",
  "password": "Pass@123"
}
```

- List User Service
  - URL: `{{url}}/usermgmt/users`
- Update User Service
  - URL: `{{url}}/usermgmt/user`
  - Request Body:

```json
{
  "username": "admin1",
  "email": "dkalyanreddy@gmail.com",
  "role": "ROLE_ADMIN",
  "enabled": true,
  "firstname": "fname2",
  "lastname": "lname2",
  "password": "Pass@123"
}
```

- Delete User Service
  - URL: `{{url}}/usermgmt/user/admin1`
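If you prefer the command line over Postman, the same calls can be made with curl. This is only a sketch: the HTTP verbs (POST for create, PUT for update, DELETE for delete) are assumed from common REST conventions and are not stated on this page, and `$URL` stands in for `http://<EKS-WorkerNode-Public-IP>:31231`.

```sh
URL=http://<EKS-WorkerNode-Public-IP>:31231

# Health Status API
curl $URL/usermgmt/health-status

# Create User (verb assumed: POST)
curl -X POST $URL/usermgmt/user \
  -H "Content-Type: application/json" \
  -d '{"username":"admin1","email":"dkalyanreddy@gmail.com","role":"ROLE_ADMIN","enabled":true,"firstname":"fname1","lastname":"lname1","password":"Pass@123"}'

# List Users
curl $URL/usermgmt/users

# Update User (verb assumed: PUT)
curl -X PUT $URL/usermgmt/user \
  -H "Content-Type: application/json" \
  -d '{"username":"admin1","email":"dkalyanreddy@gmail.com","role":"ROLE_ADMIN","enabled":true,"firstname":"fname2","lastname":"lname2","password":"Pass@123"}'

# Delete User (verb assumed: DELETE)
curl -X DELETE $URL/usermgmt/user/admin1
```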
## Step-05: Verify Users in MySQL Database
```sh
# Connect to the MySQL Database
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -pdbpassword11

# Verify that the usermgmt schema we provided in the ConfigMap got created
mysql> show schemas;
mysql> use usermgmt;
mysql> show tables;
mysql> select * from users;
```
## Step-06: Clean-Up
- Delete all k8s objects created as part of this section
```sh
# Delete All
kubectl delete -f kube-manifests/

# List Pods
kubectl get pods

# Verify sc, pvc, pv
kubectl get sc,pvc,pv
```