[RedHat] EX280v4.10 example

Surote Wongpaiboon
Apr 13, 2023

TLDR;

  1. Create users on OpenShift for the users below
    htpasswd file name is htpasswd
    use test-htpasswd for both the identity provider name and the secret name
  • armstrong with password indionce
  • jobs with password demo123
  • wozniak with password veresa
htpasswd -c -B -b htpasswd armstrong indionce
htpasswd -B -b htpasswd jobs demo123
htpasswd -B -b htpasswd wozniak veresa

oc create secret generic test-htpasswd --from-file=htpasswd -n openshift-config
oc edit oauth

---
spec:
  identityProviders:
  - htpasswd:
      fileData:
        name: test-htpasswd
    mappingMethod: claim
    name: test-htpasswd
    type: HTPasswd

---

Wait for the pods in openshift-authentication to restart, then test the login.
You can use oc explain oauth to show the supported parameters. This command is helpful!
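
To watch the rollout and verify a login (a minimal sketch; the API URL is an assumption about your cluster):

oc get pods -n openshift-authentication -w
oc login -u armstrong -p indionce https://api.<cluster-domain>:6443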

2. Create groups

  • apollo
  • ops
oc adm groups new apollo
oc adm groups new ops

3. Assign roles to users

  • user armstrong cannot create projects
  • user wozniak can create projects
  • user jobs is cluster admin
oc adm policy add-cluster-role-to-user cluster-admin jobs
oc adm policy add-cluster-role-to-user self-provisioner wozniak
oc edit clusterrolebinding self-provisioners
---
delete all subjects (this removes the default self-provisioner grant from system:authenticated:oauth, so armstrong can no longer create projects)
---
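
Alternatively, a one-line patch clears the subjects without an interactive edit:

oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'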

4. Assign users to groups

  • Add user armstrong to apollo group
  • Add user wozniak to ops group
oc adm groups add-user apollo armstrong
oc adm groups add-user ops wozniak
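
A quick check that the groups and memberships took effect:

oc get groups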

5. Assign roles to groups

  • group apollo has the admin role on project moon
  • group ops has the view role on project moon
oc adm policy add-role-to-group admin apollo -n moon
oc adm policy add-role-to-group view ops -n moon

6. Create a quota for project moon named moon-quota

  • maximum cpu is 2 cores
  • maximum memory is 2Gi
  • maximum replicationcontrollers is 3
  • maximum services is 5
oc create quota moon-quota --hard=limits.cpu=2,limits.memory=2Gi,replicationcontrollers=3,services=5 -n moon

You can use the web console to do this as well.
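
To verify the quota and watch current usage:

oc describe quota moon-quota -n moon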

7. Create a limit range in project moon named moon-limit

  • maximum pod cpu is 1 core, minimum is 50m
  • maximum pod memory is 1Gi, minimum is 100M
  • maximum container cpu is 500m, minimum is 50m
  • maximum container memory is 750M, minimum is 25M
  • default request for a container is 150M memory and 250m cpu
apiVersion: v1
kind: LimitRange
metadata:
  name: moon-limit
  namespace: moon
spec:
  limits:
  - type: "Pod"
    min:
      cpu: 50m
      memory: 100Mi
    max:
      cpu: 1
      memory: 1Gi
  - type: "Container"
    min:
      cpu: 50m
      memory: 25Mi
    max:
      cpu: 500m
      memory: 750Mi
    defaultRequest:
      cpu: 250m
      memory: 150Mi
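
Save the manifest as moon-limit.yaml (the filename is arbitrary), then apply and verify:

oc apply -f moon-limit.yaml
oc describe limitrange moon-limit -n moon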

8. An application is already deployed in the moon-2 project; scale it to 5 pods

oc scale --replicas=5 dc/<deploymentconfig name> -n moon-2
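
To confirm the desired and current replica counts match:

oc get dc/<deploymentconfig name> -n moon-2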

9. An application is already deployed in the moon-3 project; configure automatic scaling

  • minimum pods is 1
  • maximum pods is 5
  • cpu baseline is 65%
  • configure the application with a cpu request of 25m and a cpu limit of 50m
oc set resources dc/<deploymentconfig name> --limits=cpu=50m --requests=cpu=25m -n moon-3
oc autoscale dc/<deploymentconfig name> --min=1 --max=5 --cpu-percent=65 -n moon-3
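
Check that the autoscaler was created and is reading CPU metrics:

oc get hpa -n moon-3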

10. Create a self-signed cert and use it with a route on the application in the moon-4 project

  • create a self-signed cert → they provide a shell script that creates your .crt and .key files
  • create a secure route named moon4approute from the crt and key
  • the application must produce output without any errors
oc create route edge moon4approute --service=<service-name> --cert=out.crt --key=out.key -n moon-4
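
On the exam the cert-generating script is provided; to reproduce it on a practice cluster, an openssl one-liner like this works (the filenames and CN are assumptions):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout out.key -out out.crt -subj "/CN=<route-hostname>"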

11. Create a secret in project moon-5 named decoding_key

  • data=hello-xyz
oc create secret generic decoding_key --from-literal=data=hello-xyz -n moon-5
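
Heads-up: Kubernetes object names cannot contain underscores, so a literal decoding_key will be rejected by the API; use the exact name given on the exam. To double-check the stored value:

oc extract secret/decoding_key -n moon-5 --to=-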

12. Deploy an application in moon-5 with env from secret decoding_key

  • the application must produce output without any errors
oc set env --from=secret/decoding_key dc/<deploymentconfig name> -n moon-5
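
List the resolved environment to confirm the secret is wired in:

oc set env dc/<deploymentconfig name> --list -n moon-5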

13–16 -> troubleshooting of the deploymentconfig/service/route so the application produces the correct output

Scenarios 13–16:
13) the service uses the wrong selector, so it doesn't match the application's pods.
14) oc logs pod -> permission denied -> fix by assigning the anyuid SCC to the service account.
15) there is a taint on the node and the deploymentconfig doesn't set a toleration -> set one or delete the taint.
16) the memory request in the deploymentconfig is set to 80G; no node can satisfy this.
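
Hedged one-liners for these fixes (the namespace, names, and taint key are placeholders, and 512Mi is just an example request, not from the exam):

oc patch svc/<service-name> -n <ns> -p '{"spec":{"selector":{"app":"<correct-label>"}}}'
oc adm policy add-scc-to-user anyuid -z <serviceaccount-name> -n <ns>
oc adm taint nodes <node-name> <taint-key>-
oc set resources dc/<deploymentconfig name> --requests=memory=512Mi -n <ns>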
