Kloia Blog

08 Mar 2019
by ogulcan
Comments are closed

WHY IS THE KARATE FRAMEWORK AWESOME?

     Behaviour-Driven Development (BDD) is a software development approach that facilitates communication between technical and non-technical teams. It evolved from Test-Driven Development (TDD) and is similar to it, but BDD focuses mostly on the user-facing behaviour of the system.

     Karate Framework is an open-source tool for API test automation. Tests are written in the Gherkin format, which is also used for BDD, so even non-developers can create API tests for services. Let's look at some of the framework's main features and see how API test development becomes easier with it.

  • API Request Methods

As you know, there are different HTTP request methods such as GET, PUT, DELETE, and so on. Calling an endpoint with a given request method is very simple in Karate, as shown below: just type the method name you need after the 'method' keyword.

When method GET

...

When method POST

...

When method DELETE

...

When method PATCH

...
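
Putting the pieces together, a minimal complete Karate scenario might look like this (the URL and feature name are hypothetical, chosen for illustration):

```gherkin
Feature: Product API smoke test

  Scenario: Get the product list
    # 'demo.kloia.com' is a hypothetical base URL
    Given url 'https://demo.kloia.com/api/products'
    When method GET
    Then status 200
```

The 'url' keyword sets the endpoint, 'method' fires the request, and 'status' asserts on the HTTP response code.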

  • Assertion and Validation

Assertions and validations are the key part of any test process: they check responses against expected results. Without assertions or validations in your manual or automated test cases, you cannot be sure that the application is working correctly.

At this point, Karate offers a variety of neat ways to verify test cases.

=> Status Code Check

Then status 200

=> Response field check

And match product.data[0].name == 'We are KLOIANs'

=> Response-contains check

And match response contains {price: 2.5}
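
These checks can be combined in a single scenario, for example (the endpoint and response fields here are hypothetical):

```gherkin
  Scenario: Verify the product response
    Given url 'https://demo.kloia.com/api/products'
    When method GET
    Then status 200
    And match response[0].name == 'We are KLOIANs'
    And match response[0] contains { price: 2.5 }
```
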
  • Reading Files

Reading files is one of the most common needs in a test automation project, whether to re-use or manipulate payload data, scripts, or functions. I have often needed to read and modify DTOs in my projects. Karate supports reading .json, .xml, .yaml, .js, .csv, and .txt files.

* def json = read('some.json')

* set json.item[] = 'string'
And request read('product-update-data.json')
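
For example, a scenario could read a payload file, tweak one field, and send it as the request body (the file name, field, and URL are hypothetical):

```gherkin
  Scenario: Update a product with data from a file
    Given url 'https://demo.kloia.com/api/products/1'
    # read the payload and override one field before sending
    * def payload = read('product-update-data.json')
    * set payload.name = 'We are KLOIANs'
    And request payload
    When method PUT
    Then status 200
```
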
  • Calling Feature File or Scenario

Writing the same test cases and code over and over again is a common problem in development. To avoid repeating scenarios or steps, use the 'call' keyword. It can invoke a whole feature file or a specific scenario by its tag.

Given call read('call.feature')
Given call read('classpath:call.feature@kloian')
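
For instance, the called feature file might contain a tagged scenario like this (the tag matches the '@kloian' reference above; the endpoint and payload are hypothetical):

```gherkin
# contents of call.feature (illustrative)
Feature: Re-usable steps

  @kloian
  Scenario: Shared login steps
    Given url 'https://demo.kloia.com/api/login'
    And request { username: 'kloian', password: 'secret' }
    When method POST
    Then status 200
```
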

     Nowadays, software testing is becoming more important and the resources allocated to it are increasing. With the spread of microservice architecture, API tests have come to occupy an important place in software testing.

     Manual API testing alone is inadequate today, so we should support the process with automated tests to increase the quality and reliability of the application. Here the Karate Framework is a simple and very good solution that lets us write automated tests easily, without any extra development cost.

13 Feb 2019
by dsezen

We are hiring! Inbound Sales Engineer

Kloia is a Cloud and DevOps Solution Provider with a set of unique solutions.

Our client base is growing steadily, and we’re looking for a competent inbound sales engineer with exceptional communication skills and technical background to be an amazing first point of contact for the inbound sales process!

This is a Remote role! You can send your applications to career at kloia.com

As a Kloia Inbound Sales Engineer, you will:

  • Answer questions regarding our services, features and capabilities from all leads that are generated by our inbound marketing efforts. This may include leads from our website, contact form, chat, social media or any other inbound contact method.
  • Discuss with sales leads their requirements, needs, and questions via email/ticket and voice communication (Zoom, Skype, Phone, etc.).
  • Provide inside-sales assistance to existing clients and the Kloia team by preparing custom pricing and generating custom proposals.
  • Respond to leads, taking into account their skill level, expectations, and needs.
  • Produce satisfied customers by helping the Kloia team and customers, understanding their needs, demonstrating technical expertise to build trust, and guiding them to the right service. This role is not about memorizing keywords or reading from scripts.
  • Monitor trends and provide feedback to the appropriate internal team.
  • Create, standardize, and document sales processes and workflows.
  • Develop relevant skills with research and education.
  • Keep up to date with the trends in Cloud, DevOps and Microservices in the community to understand upcoming changes.

Requirements:

  • Fluent in written and verbal English. We need you to be an exceptional writer.
  • Comfortable jumping on multiple sales calls each day, with the flexibility to work across different timezones.
  • Preferably prior experience in a similar role.
  • Must understand the concepts related to Cloud, DevOps, and Microservices.
  • Technical experience using technologies such as AWS, GCP, Terraform, Jenkins, Docker, Kubernetes is required. You do not need to be a developer; however, you do need to understand and feel comfortable discussing these topics via both written and verbal conversations.
  • Willing to learn and understand new technical concepts.

Things To Remember When Applying:

We expect you to include a cover letter that points out how you meet the requirements in this listing. We would like to hear, in your own words, how your experience and expertise will add value to this role. Lastly, let us know what you think about DevOps as a Service in a few words.

Benefits

  • This is a fully-remote role! You can work from home or anywhere, as long as you have reliable internet and a quiet environment for sales calls as needed. You can also work from one of our offices in London or Istanbul!
  • This position offers flexibility, responsibility and opportunity for growth for the right candidate.

You can send your applications to career at kloia.com

 

06 Feb 2019
by dsezen

kloia Dojo Model: Sharon Bowman 4C

Although we are not a training company, given that we follow frontline practices and principles, we kept receiving training requests. As the kloia team, we iterated to create an innovative way of delivering training.

"If kloia does it, it is done in the kloian way"

Initially we delivered the trainings in a more "classical", didactic way, with one-way communication, which left neither party fully satisfied. There had to be other missing pieces of the "satisfaction puzzle". Since we were investigating the training domain from scratch, we had to find those missing parts!

As a first step forward, we converted those trainings into workshops, in which attendees are given scenarios and steps to reach the expected targets. This worked better, as attendees engaged more with the exercise. Still, we could easily see that attendees who were not fully committed to the sessions simply copied and pasted the instructions provided, which usually does not lead to internalizing the topic. They were not fully engaging with or understanding the whole process! There was still one last missing piece. What was it?

During our additional research, we found that the human brain has a complex learning process; there is even an interdisciplinary field, "Cognitive Science", spanning four disciplines:

  • Psychology
  • Linguistics
  • Philosophy
  • Computer Modelling

Upon further research, we found a model that boosts the "internalizing" process of the human brain: Sharon Bowman's 4C.

The 4C model simply consists of the following four steps:

  1. Connection: the existing connections related to the subject
  2. Concept: related concepts regarding the topic
  3. Concrete Practice: one or more end-to-end exercises related to the topic
  4. Conclusion: the subject learned so far, summarized in one or two sentences

On top of that, we found a way to convert those workshops into a "Dojo" format, in which attendees are expected to discover the solution path themselves. We define "what to do", and the attendees find out "how to do it".

As interactivity increases, internalizing the topics also increases

Each Dojo consists of several Katas, in other words, several practices. By definition, repeating Katas on the same topics helps attendees internalize a topic better.

“Code Kata is a term coined by Dave Thomas, co-author of the book The Pragmatic Programmer. Code Kata is an exercise in programming which helps a programmer hone their skills through practice and repetition”

 

Here are our Dojo rules:

  1. Fewer slides: concepts are discussed on the whiteboard!
  2. All hands-on: everyone is expected to "code"
  3. No step-by-step instructions, Google it!
  4. We are here as mentors to help you
  5. Asking questions is FREE, so ask!
  6. If you finish the kata, help the others!

 

If you are interested in participating in one of our Dojos, please check the following pages:
https://kloia.com/training
https://kloia.co.uk/training

03 Jan 2019
by onursalk

kloia achieves AWS DevOps Competency!

London — November 14, 2018 — Kloia, a new-era digital transition solution provider, announced today that it has achieved Amazon Web Services (AWS) DevOps Competency status. This designation recognizes that Kloia provides deep expertise in DevOps practices and culture, helping customers implement continuous integration and continuous delivery practices and automate infrastructure provisioning and management with configuration management tools on AWS.

kloia AWS DevOps Competency

Achieving the AWS DevOps Competency differentiates Kloia as an AWS Partner Network (APN) member that provides specialized, demonstrated technical proficiency and proven customer success, with a specific focus on Containerization and Orchestration (including Kubernetes and EKS), Continuous Integration & Continuous Delivery, Monitoring, Logging, and Performance, Infrastructure as Code, DevSecOps, and Consulting. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS.

28 Aug 2018
by onurgurdamar

Application Modernization @ AWS Solution Space

Great news everyone!
kloia is now a solution provider at AWS Solution Space!

Our solution to modernize the legacy applications has been accepted for AWS Solution Space platform.

Software development and deployment practices change so frequently that your software stack can suddenly turn into "legacy". The consequence can be redeveloping your software stack from scratch, which carries opportunity costs and risks. We offer an alternative: "modernizing" your software stack while keeping your current codebase, applying refactorings and the latest DevOps practices.

Our solution provides Application Modernization on AWS using Jenkins, Prometheus, Logz.io, NewRelic, Slack, OpsGenie and TestGenie.

We provide three different approaches to help companies modernize their legacy applications.

  • .Net Modernization
  • .Net Core Modernization
  • .NET to .NET Core Transition

All three approaches include creating all environments with Terraform; building the pipeline with Jenkins or CodePipeline, covering CI (Continuous Integration) and CD (Continuous Delivery) practices; logging application diagnostics and reporting; behaviour monitoring; and alert management.

The solution steps are:

  1. Assessing your current framework and creating an appropriate road map
  2. Enhancing and modernizing the platform architecture
  3. Splitting portable parts from non-movable parts and migrating them
  4. Enriching support for foundational, cross-platform applications

You can read more about the solution:
.Net Modernization and .Net Core Modernization

Please go and check our page on AWS Solution Space!

01 Aug 2018
by dsezen

kloia is now the 3rd biggest Service Provider Company according to Bilisim 500+

kloia, a new-era Solution Provider for DevOps, Cloud, and Microservices, ranked as the 3rd biggest Service Provider Company in the Bilisim 500+* category.

The kloia team is growing rapidly and globally, and now has customers worldwide: in the US, UK, EU, Turkey, and Dubai.

Our family is getting bigger every minute, every hour, every day! Thank you for choosing us. Special thanks to Customers who are innovating with us.

* Companies younger than 3 years old

bilisim500

18 Jul 2018
by onurgurdamar

Dockerizing and Deploying a .NET Core Application to AWS ECS Fargate

In this post I'm going to explain how to dockerize a .NET Core application and deploy it to AWS ECS Fargate.

First things first,

As a starting point, it might be good to know what Docker is and what a container means.
If you are already familiar with those, just scroll down 🙂

If not please take a look at my previous post Docker 101 from here.

The main question you should ask is: what is the AWS ECS service?

AWS ECS (Amazon Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.

Sounds great, right? But what does that mean?

It's easy: when you want to run an application inside a Docker container, you mainly need an environment, which in most cases means servers. Sounds cool, but how can we manage the containers on those servers? Yes! That is the right question to ask. AWS gives you ECS to help you manage and orchestrate your containers.

(Please be advised that another orchestration system is the mighty Kubernetes, aka k8s; the AWS service that supports k8s is EKS, which I'll cover in a separate blog post.)

All good, but what is Fargate? The answer is one step further: truly serverless! Sounds interesting, doesn't it? Remember I told you that you need servers to run containers? With Fargate, you don't need a single server! AWS handles everything for you!

If you are interested in serverless or want to learn more about it, please be patient, there will be another post for it 😉

Let’s start to talk about ECS.
First of all you need a Dockerfile so you can tell Docker daemon how to create the Docker image.

After creating the Docker image we need to register it to a repository. It can be Docker Hub, Amazon ECR or even your private repository.
After registering the image with the repository we need to create a service and task definition.
A service is basically our application in the ECS world. You need to define the service: it can be one that is ready to respond at any time (like a web application), an application that needs to run only once, or one scheduled to run at specific times.

Deployment types for ECS

The task definition holds the parameters ECS uses for orchestration.
We can define the vCPU and memory amounts here, along with how many tasks we want and the environment variables for our application.
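
As a rough sketch, the task definition that ends up in ECS might look like this in JSON (the names, account ID, and region are purely illustrative):

```json
{
  "family": "starwars-api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "starwars-api",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/starwars-api:latest",
      "portMappings": [{ "containerPort": 80 }],
      "environment": [
        { "name": "ASPNETCORE_ENVIRONMENT", "value": "Production" }
      ]
    }
  ]
}
```

The cpu/memory values, network mode, and container definitions shown here correspond to the options the deployment wizard fills in for you.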

When you create an EC2 cluster, ECS spins up some EC2 instances so it can use them for orchestration. But if you select Fargate, you never see a single server. Truly serverless!

 

Comparison of EC2 and Fargate Clusters

The orange boxes on the left side of the image above are EC2 instances. ECS puts some information into the user data so it knows which instances belong to which cluster. ECS distributes tasks (actually containers running our application) across the instances. But as you can see on the right-hand side, there are no EC2 instances, just the service and tasks. We do not need to think about maintaining virtual servers. With Fargate, you are billed per vCPU per second and per GB of memory per second. As the classic saying goes, you pay for what you use!

Now it's time to explain how we can deploy a .NET Core application. Microsoft likes providing developer-friendly tools 🙂
We can easily prepare our .NET Core application for containerization by using the "Add Docker Support" option. Thanks, Microsoft!

This option actually adds a Dockerfile and docker-compose files to the solution.

 

Sample Dockerfile

 

Please take a moment to read the sample Dockerfile. It is very simple: it uses the aspnetcore:2.0 image to build and run the application, copies the solution and project files into the image, restores the packages, builds the code, and publishes the artifacts. Lastly, it creates an entrypoint that runs the dotnet command with StarWars.dll as its argument.
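
Since the screenshot is not reproduced here, a Dockerfile along the lines described above might look like this (a sketch based on the description; the StarWars project name comes from the post, while the file layout follows a typical template of that era):

```dockerfile
# Build stage: uses the aspnetcore-build 2.0 image to compile the app
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
# Copy the solution and project files, then restore packages
COPY StarWars.sln .
COPY StarWars/StarWars.csproj StarWars/
RUN dotnet restore
# Copy the rest of the code, build, and publish the artifacts
COPY . .
RUN dotnet publish StarWars/StarWars.csproj -c Release -o /app

# Runtime stage: the lighter aspnetcore 2.0 image runs the published output
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "StarWars.dll"]
```
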

When you run the application locally, Visual Studio automatically builds the image using the Dockerfile and docker-compose files. You can check the image with the 'docker images' command, and the container with the 'docker ps' command.

Before moving on let’s create a cluster so we can deploy our application.

Step 1 is where we need to make a choice: do we want to manage EC2 servers, or do we want to go fully serverless? The "Networking only" option is the Fargate (serverless) option. If you choose EC2 Linux or EC2 Windows, ECS will create EC2 instances to orchestrate the containers. Of course, these options bring a maintenance cost. So let's move on with Fargate.

We need to give our cluster a name and select our VPC in the next section. Creating the cluster is a very fast process. Be aware that if you go with EC2 clusters, it will take some time to spin up the EC2 instances.

Now we are ready to deploy our application from Visual Studio. To do that, we need to install the AWS Toolkit for Visual Studio.
You can download the toolkit from this link.

Actually, the best practice is to create a pipeline that does the deployment automatically, but that is out of scope for now; it will come in later posts as well 😉
We can right-click the project we want to deploy and choose the "Publish Container to AWS" option.

The first step is selecting the AWS Region, build configuration (Debug/Release), image tag, and deployment type.

 

In the launch configuration, we need to specify the cluster, vCPU, memory, subnet, and security group options.

 

Then we need to define our service. Here we could select an existing service, but since we are deploying for the first time, let ECS create a new service for us. We can also define the number of tasks (for high availability, set more than 1).

Minimum Healthy Percentage means that at least that percentage of tasks must be healthy for ECS to consider the service healthy.
Maximum Percent means that at most that percentage of tasks can run at the same time.

 

As a best practice, we need to create a load balancer. ECS uses an Application Load Balancer (ALB) for containerized applications.

 

The last step is creating the task definition. Security is everything for AWS; because of that, we have to set some roles so ECS can execute the necessary actions on your account.

 

When you click "Publish", Visual Studio will create the image, register it to ECR, create the service and task definition, and run the application.

 

That is all we need to do to dockerize our .NET Core application and deploy it to AWS ECS Fargate.

As I said before, deployment should be handled by a pipeline so it can be done automatically. The AWS Console and Visual Studio tools are good for learning and testing, but in production everything must be tested and automated.

You can also check our web pages for detailed information:

https://dotnetcore.kloia.com
https://dotnet.kloia.com

If you have any questions please do not hesitate to ask.

Cheers!

23 Apr 2018
by dsezen

kloia achieves AWS Competency!

London — April 4, 2018 — Kloia, a new-era digital transition solution provider, announced today that it has achieved the Amazon Web Services (AWS) Microsoft Workloads Competency in the Application Modernization category, as the first AWS Partner in the EMEA region!

This designation recognizes that kloia provides .NET Core transitions with containerization, helping customers design, migrate, deploy, and manage Microsoft-based .NET applications on Linux-based Kubernetes clusters on AWS while adopting DevOps practices.

Achieving the AWS Microsoft Workloads Competency differentiates Kloia as an AWS Partner Network (APN) member that provides specialized, demonstrated technical proficiency and proven customer success, with a specific focus on workloads based on Application Modernization solutions. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS.

AWS Microsoft Workloads — Application Modernization Competency Award

 

We are honored to have achieved AWS Microsoft Workloads Competency status, and we look forward to helping our customers modernize their applications and leverage the agility, breadth of services, and pace of innovation that AWS provides.

 

AWS enables scalable, flexible, and cost-effective solutions for everyone from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Partner Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise.

 

Kloia's approach to application modernization covers re-platforming and re-factoring of legacy or monolithic application stacks. Kloia has already been performing transition projects across various stacks and will now also be able to address .NET-to-.NET Core transitions using containers and orchestration systems.

19 Dec 2016
by dsezen

New Year Lottery


Wishing everyone luck and success in 2017…

 

The winners of the draw held among the applications received by December 23 are as follows:

USB DevOps key: Oğuzhan Cengiz, Oktay Sabak, Onur Şimşek, Orhun Çıraklı

CloudFlare t-shirt: Numan Kaçar, Tuhanan Pehlivan, Emre Torun, Hülya Çakır

One-day training attendance: Çağrı Özer

19 Dec 2016
by dsezen

Your DevOps key is on us!

Some doors in DevOps are hard to open, especially when culture comes into play… We believe every company should have its own DevOps journey. Along that journey, driven by the feedback gathered and the experiments performed, there are decisions to make and many things to update: communication, processes, tools…

We gathered publicly downloadable documents that we believe will broaden your horizon on this journey, and copied them onto a USB key you can always carry on your keychain.


 

 

Cloud, DevOps and Microservices Solution Provider © 2025