Assignment #1: Data model [10%]
This assignment relates to the following Course Learning Requirements:
CLR 1: Identify, explain, and use various technologies used in the Enterprise environment.
CLR 3: Implement Web Server integration with enterprise applications.
CLR 5: Utilize as well as defend against common security vulnerabilities found in enterprise applications and the multi-server networked environment.
CLR 6: Implement and Integrate various Java based technologies used in the enterprise environment.
Objective of this Assignment:
Check your ability to effectively plan enterprise applications by defining the schema of a data model in terms of MVC and choosing a communication model.
Pre-Assignment Instructions:
1. To prepare for this assignment, read the content of Modules 3, 4, and 5 and follow the embedded learning activities.
2. You require drawing software that allows you to draw diagrams and charts.
a. The following tool is a suggestion:
i. The standalone app of https://app.diagrams.net/, which can be downloaded here: https://github.com/jgraph/drawio-desktop/releases/tag/v13.6.2
ii. You are welcome to use any other tool, such as Microsoft Visio (Windows only) or pen and paper (literally).
3. You must also have a database installed on your computer.
a. For simplicity:
i. I used MySQL v.8+ (https://dev.mysql.com/downloads/) and provided you with an example of the data model in Module 5.
ii. You may use any database you wish; moreover, if you decide to switch your solution to NoSQL, it will be considered a bonus. NOTE: it should be correct and workable, otherwise no bonus will be added.
Assignment description:
In this assignment you will start by developing a backend for a Twitter-like application.
The application should provide the following functionality:
1. At least 2 roles
a. Producer; and
b. Subscriber.
2. The Producer role is the same as the Subscriber role but has an extra capability:
a. The Producer can produce messages, which the Subscribers receive;
3. A user may have both roles at the same time;
4. Users having the Subscriber role can subscribe to as many Producers as they want;
5. All messages are stored in the database and can be easily searched based on the following criteria:
a. User (Producer) ID – i.e., who wrote it;
b. Message content.
Assignment Tasks:
What you should do:
1. Applying the MVC design pattern, you need to build a data model for the application described in the Assignment Description section, together with API contracts.
2. As a result, you should have the following:
· ERD diagram (if you use an RDBMS) as a picture in JPEG/PNG format;
· Script to create the data model in the DB (SQL for an RDBMS, CQL for Cassandra, etc.), which should be runnable and create the expected data structure;
· Script to populate the database with a few records (2-3 users, 5-7 messages);
· A minimum of the following queries (an illustrative sketch of two of them, under a hypothetical schema, follows this list):
i. Get list of users;
ii. Get list of content producers;
iii. Get list of content subscribers (full);
iv. Get list of content subscribers subscribed to a specific producer;
v. Get all messages;
vi. Get all messages created by a specific producer;
vii. Get all messages for a given subscriber (it may include messages from multiple producers).
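For reference only, here is a minimal sketch of what two of these queries could look like, assuming a hypothetical MySQL schema with app_user(user_id, user_name), subscription(subscriber_id, producer_id), and message(message_id, producer_id, content) tables; your own data model may use different tables and names.
-- iv. Subscribers of a specific producer (hypothetical schema; example producer_id = 1)
SELECT u.user_id, u.user_name
FROM app_user u
JOIN subscription s ON s.subscriber_id = u.user_id
WHERE s.producer_id = 1;
-- vii. All messages for a given subscriber, possibly from several producers (example subscriber_id = 2)
SELECT m.message_id, m.producer_id, m.content
FROM message m
JOIN subscription s ON s.producer_id = m.producer_id
WHERE s.subscriber_id = 2
ORDER BY m.message_id;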
Assignment Submission
3. You need to submit an archive with name pattern {Course #}_{Section #}_{Last name}_{First name}.zip containing the following:
· ERD diagram (if you use an RDBMS) as a picture in JPEG/PNG format;
· Script to create the data model in the DB (SQL for an RDBMS, CQL for Cassandra, etc.);
· Script to populate the database with a few records (2-3 users, 5-7 messages);
· List of queries in a regular text file;
· Screenshots demonstrating the result of each query execution.
Assignment Grading Rubric (10%)
Assignment Quality (/1)
· Excellent (80-100%): All information provided is accurate. All information is clearly expressed and well explained. Contains original ideas, connections, or applications.
· Good (50-79%): Most information provided is accurate. All information is clearly expressed and explained. Contains mainly original ideas, connections, or applications.
· Requires Improvement (below 50%): Some or no accurate information is offered. Information is rarely or never clear and requires further explanation. Many non-original ideas, or unclear connections or applications.
Assignment Knowledge and Skills Demonstration (/8)
· Excellent (80-100%): Clear, concise synthesis of course content to demonstrate understanding of the topic. All ideas are clearly developed, organized logically, and connected with effective transitions. Explores ideas, supports points fully using a balance of evidence, and uses effective reasoning to make useful distinctions. All relevant course and topic links are made.
· Good (50-79%): Evidence of some synthesis of course content to demonstrate understanding of the topic. Some unified and coherent ideas are developed with effective transitions. Supports most ideas with effective examples, references, and details, and makes key distinctions. Most relevant course and topic links are made.
· Requires Improvement (below 50%): Lack of evidence or weakness in the synthesis of course content to demonstrate understanding of the topic. Develops and organizes ideas that are not necessarily connected; some ideas seem illogical and/or unrelated. Presents ideas in general terms; most ideas are inconsistent or unsupported, and reasoning is flawed or unclear. Some or no relevant course and topic links are made.
Assignment Structure (/1)
· Excellent (80-100%): Formatted as per assignment details. Structure and format enhance delivery of the information.
· Good (50-79%): Formatted as per assignment details in most components. Structure and format fit well with the delivery of the information.
· Requires Improvement (below 50%): Formatting has not been followed. Structure and format are unclear and impede delivery of the information.
Total Points (/10): Excellent = 10, Good = 7, Requires Improvement = 5.

Module 3: Application Environment and Integration
Introduction
This module introduces different techniques for integrating microservices into the whole ecosystem. You’ll discover two different approaches to scaling applications in order to handle high load, learn the importance of service discovery and load balancers, and see S.O.L.I.D. principles in practice.
Learning Outcomes
By the end of this module, you should be able to complete the following:
1. Enumerate and apply scaling techniques for enterprise applications;
2. Explain the scope of each microservice and perform functional decomposition at a high level.
Key Terms & Concepts
Scalability: The property of a system for handling a growing amount of work by adding resources to the system.
Service Discovery: A process that automatically detects devices, and the services offered by those devices, on a computer network in order to reduce the configuration effort required from users. This process helps to present the system as a monolith with a single point of entry.
Load Balancer: Any software or hardware device that facilitates the load balancing process for most computing appliances, including computers, network connections and processors. It enables the optimization of computing resources, reduces latency, and increases output and the overall performance of a computing infrastructure.
Orchestration: The activity of coordinating calls to several different services, in order to process a single service request.
Scalability Problem
The load on your applications varies depending on the time of the day, the day of the month or the month of the year.
Take, for instance, amazon.com. Its load is very high around Thanksgiving, up to 20 times the normal load. However, during major sporting events, such as the Super Bowl or a FIFA World Cup, the traffic could be considerably lower, because everybody is busy watching the event. It is quite possible that the infrastructure needs to handle 10x the normal load.
How is it possible to serve such a number of requests with one single application?
The actual problem is shown in the following figure.
Figure 1 - High Load Problem
It’s not much of an issue when the number of incoming requests is low and your application is capable of serving them. However, what will happen if the number of requests rises significantly? Imagine this situation: your service starts adding incoming messages to a queue, where they keep accumulating to the point where some of them are no longer relevant (the time-to-live factor), or worse, the incoming queue overflows, leading you to an even bigger problem.
Although this is a possibility, in the microservice world the answer is simple: it would not be considered a single application anymore.
Recall that in the previous module we discussed the 4 main criteria of any microservice application. This is important for you, as those criteria make it possible to scale an application horizontally:
Small: Microservices are designed to be small. Estimation techniques such as function points and use cases may be used to define the size of a microservice. Usually, however, “small” refers to the amount of functionality each microservice performs, and generally one microservice should be responsible for only one independent block of functionality in the ecosystem.
Stateless: A stateless application handles every request with the information contained only within it. A microservice must be stateless, and it must service the request without remembering the previous communications from the external system.
In(ter)dependent: Microservices must service the request independently; they may collaborate with other microservices within the ecosystem. For example, a microservice that generates a unique report after interacting with other microservices is an interdependent system. In this scenario, other microservices, which only provide the necessary data to the reporting microservices, may be independent services.
Full-Stack Application: A full stack application is individually deployable. It has its own server, network & hosting environment. The business logic, data model and the service interface (API or GUI) must be part of the entire system. A microservice must be a full stack application.
How to scale an application horizontally? What does it mean?
Simply put, you have to run a second instance of it. Let’s take another look at Figure 1 in greater detail.
Figure 1 - High Load Problem
In Figure 1, you can see only one application and five clients trying to reach it. This is a fairly common situation, considering that the application runs on your powerful server somewhere in the datacenter. In addition, you will also use vertical scalability techniques at the same time, in order to use your hardware at its full power without leaving it idle and wasting electricity.
Keeping this in mind, what happens if you simply start another application? How will your client know where to go and how to reach that second application or another instance of that application?
To answer the question: naturally, you can assign a different IP address, and even a different DNS name in the case of a web service, to that second application; but would that be convenient for a client?
To help better explain client convenience, let’s use the following example:
A client connects to your service using a DNS name like www.example.com, but at some point it figures out that the service is not responding, therefore requiring him or her to use another address, which is www1.example.com.
Is this a convenient approach for your customers? And how do they know which IP address or DNS name they need to connect to, in order to ensure they receive first-class performance?
The answer to all these questions is a Load balancer. Load balancers will save you in these situations.
Load Balancing
What is a load balancer? A load balancer is any software or hardware device that facilitates the load balancing process for most computing appliances, including computers, network connections and processors. It enables the optimization of computing resources, reduces latency, and increases output and the overall performance of a computing infrastructure.
With that definition in mind, take a look at Figure 2:
Figure 2 - Load Balancing
Load balancing is a method to distribute incoming socket connections to different servers. It is not distributed computing, where jobs are broken up into a series of sub-jobs so that each server does a fraction of the overall work. It is not that at all. Rather, incoming socket connections are spread out to different servers. Each incoming connection will communicate with the node it was delegated to, and the entire interaction will occur there. Each node is not aware of the other nodes’ existence.
Why is load balancing required?
Scalability: If your application becomes busy, resources such as bandwidth, CPU, memory, disk space, disk I/O, and more may reach their limits. To remedy such a problem, you have two options: scale up or scale out. Load balancing is a scale-out technique. Rather than increasing server resources, you add cost-effective commodity servers, creating a “cluster” of servers that perform the same task. Scaling out is more cost-effective, because commodity-level hardware provides the most ‘bang for the buck’. High-end supercomputers come at a premium and can be avoided in many cases.
Redundancy: Servers crash; this is the rule, not the exception. Your architecture should be devised in such a way as to reduce or eliminate single points of failure (SPOF). Load balancing a cluster of servers that perform the same role provides room for a server to be taken out manually for maintenance tasks without taking down the system. You can also withstand a server crashing. This provides you with high availability; load balancing is a tactic that assists with it.
How to perform load balancing? There are 3 well-known ways:
1. DNS based:
This is also known as round robin DNS. You can inject multiple A records for the same hostname. This creates a random distribution – requests for the hostname will receive the list in a random order. If you wish to weight it (say serverA can take 2x the number of requests of serverB), you can publish serverA's address in additional A records so that it is returned more often.

Solution

Mohd answered on Jan 27 2023
Entity Relationship Diagram:
-- creating database (uncomment the next line on first run if Test_D does not exist yet)
-- CREATE DATABASE Test_D
USE Test_D
GO
-- creating producer table
CREATE TABLE Producer (
producer_ID INT IDENTITY(101, 1) PRIMARY KEY,
Producer_NAME VARCHAR(50) NOT NULL,
tweet_message VARCHAR(280),
EMAIL VARCHAR(70)
)
GO
-- creating Subscriber table
CREATE TABLE Subscriber (
Subscriber_ID INT IDENTITY(201, 1) PRIMARY KEY,
Subscriber_NAME VARCHAR(10) NOT NULL
)
GO
-- Creating mapping table to join producer and subscriber table
CREATE TABLE TBL_MAPPING (
producer_ID INT REFERENCES Producer(producer_ID),
Subscriber_ID INT REFERENCES Subscriber(Subscriber_ID)
)
GO
--- Inserting input into Producer table
INSERT INTO Producer VALUES
('Kallie Blackwood', 'Thank you for everything.', '[email protected]'),
('Johnetta Abdallah', 'My last ask is the same as my first.', '[email protected]'),
('Bobbye Rhym', 'Always in my...
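For illustration only, and not part of the answer above: the queries required by the assignment could be written against the Producer, Subscriber, and TBL_MAPPING tables roughly as follows. The example IDs 101 and 201 are assumed values matching the IDENTITY seeds in the table definitions, and because this schema keeps a single tweet_message column on Producer, the message queries read that column directly.
-- i. Get list of users (all producers and subscribers)
SELECT producer_ID AS user_ID, Producer_NAME AS user_NAME FROM Producer
UNION ALL
SELECT Subscriber_ID, Subscriber_NAME FROM Subscriber;
-- ii. Get list of content producers
SELECT producer_ID, Producer_NAME, EMAIL FROM Producer;
-- iii. Get full list of content subscribers
SELECT Subscriber_ID, Subscriber_NAME FROM Subscriber;
-- iv. Get subscribers of a specific producer (example: producer 101)
SELECT s.Subscriber_ID, s.Subscriber_NAME
FROM Subscriber s
JOIN TBL_MAPPING m ON m.Subscriber_ID = s.Subscriber_ID
WHERE m.producer_ID = 101;
-- v. Get all messages
SELECT producer_ID, Producer_NAME, tweet_message FROM Producer;
-- vi. Get all messages created by a specific producer (example: producer 101)
SELECT tweet_message FROM Producer WHERE producer_ID = 101;
-- vii. Get all messages for a given subscriber (example: subscriber 201; may span several producers)
SELECT p.Producer_NAME, p.tweet_message
FROM Producer p
JOIN TBL_MAPPING m ON m.producer_ID = p.producer_ID
WHERE m.Subscriber_ID = 201;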