Table of Contents
- Kakao and Google Login with Spring Security
- Problems after Applying Spring Security
- Resolver
- Installing Kafka (KRaft mode)
- Installation Steps
- Installing Java
- Installing Kafka on EC2
- Generating the Kafka Cluster ID
- Creating and Editing the Configuration File for Each Broker
- Initializing Each Broker's Data Directory
- Running in the Background
- SonarQube
Work completed during this sprint
- Kakao and Google login (Google requires a security review)
- Logout and account deletion
- Diary detail page API
- Favorites, rating, and appreciation APIs
- Kafka deployment
- Code quality checks with SonarQube
Kakao and Google Login with Spring Security
- To apply Spring Security, I read the book Spring Security in Action and started development!
- However, the book targets Spring Security 5.x.x, so it did not quite match the Spring Boot 3.x.x version I am developing with.
- The basic flow of Spring Security is as follows.
[Figure: Spring Security authentication flow]
- A request passes through the authentication filter and then the authentication manager; the authentication provider is handed the user details service and the password encoder, performs authentication, and afterwards the authenticated user's information is stored in the security context.
- However, since this project is an app, we used OAuth and JWT for login to make signing in easier for users.
[Figure: OAuth + JWT login flow]
- The login flow used in this project is as follows.
- When a login is requested, the client signs in through an SNS account.
- After signing in, the client receives an authorization code and exchanges it for that account's token.
- Using the issued token, we fetch the user's information and register the user as a member of our service.
- A JWT is then issued to the registered user.
- The user information from the issued JWT is placed into the Spring Security context so that it can be accessed.
- For subsequent requests coming in through the Spring Security filter chain, if the JWT is valid, access to the app is allowed.
- Here, the JwtAuthenticationFilter is placed before UsernamePasswordAuthenticationFilter in the filter chain (a configuration sketch follows below).
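The post does not show the filter registration itself; as a minimal sketch in the Spring Security 6 style (assuming the project's own JwtAuthenticationFilter bean, with hypothetical /auth/** login endpoints), the placement described above could look like this:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;

@Configuration
@EnableWebSecurity
public class SecurityConfig {

    private final JwtAuthenticationFilter jwtAuthenticationFilter; // the project's custom JWT filter

    public SecurityConfig(JwtAuthenticationFilter jwtAuthenticationFilter) {
        this.jwtAuthenticationFilter = jwtAuthenticationFilter;
    }

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
                .csrf(csrf -> csrf.disable())                       // stateless JWT API, no CSRF token
                .sessionManagement(session ->
                        session.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
                .authorizeHttpRequests(auth -> auth
                        .requestMatchers("/auth/**").permitAll()    // hypothetical SNS login endpoints
                        .anyRequest().authenticated())
                // run our JWT filter before the username/password filter, as described above
                .addFilterBefore(jwtAuthenticationFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }
}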
Problems after Applying Spring Security
- Once Spring Security alone was applied, every error came back as a 403, no matter what.
- According to the official Spring blog, when an error occurs Spring Boot tries to forward the request to the /error URI. If you navigate to that URI directly, the Whitelabel Error Page appears.
- The Whitelabel Error Page itself has nothing to do with the 403; the key point is that whenever an error occurs, a forward to /error is attempted.
- However, since we had not permitted the /error endpoint, the forward to the error page carried no token and a 403 was returned.
- After writing the error codes for token failures and wiring an entry point into the filter chain alongside the JwtAuthenticationFilter, the token-related errors finally became visible (see the sketch below).
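The post does not show the fix itself in code; a rough sketch of how the chain above might be revised (the JSON error body and the lambda entry point are assumptions, not the project's real classes) could be:

// revised SecurityFilterChain from the sketch above; additionally needs
// jakarta.servlet.http.HttpServletResponse, org.springframework.http.MediaType
// and org.springframework.security.web.AuthenticationEntryPoint imports
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                    .requestMatchers("/error").permitAll()   // let Spring Boot's /error forward through
                    .requestMatchers("/auth/**").permitAll()
                    .anyRequest().authenticated())
            // answer authentication failures with the token error instead of a bare 403
            .exceptionHandling(ex -> ex.authenticationEntryPoint(jwtEntryPoint()))
            .addFilterBefore(jwtAuthenticationFilter, UsernamePasswordAuthenticationFilter.class);
    return http.build();
}

@Bean
public AuthenticationEntryPoint jwtEntryPoint() {
    // hypothetical entry point: writes out whichever token error was detected
    return (request, response, authException) -> {
        response.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
        response.setContentType(MediaType.APPLICATION_JSON_VALUE);
        response.getWriter().write("{\"error\":\"INVALID_TOKEN\"}");
    };
}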
Resolver
- Every time a JWT is sent, I wanted to extract the user id and role from the token.
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME) // retained at runtime
@Parameter(hidden = true)           // hidden from the Swagger docs
public @interface AuthUser {
}
- I created an annotation interface like the one above, and then wrote the following argument resolver.
@Component
public class AuthUserArgumentResolver implements HandlerMethodArgumentResolver {

    public AuthUserArgumentResolver() {
    }

    @Override
    public boolean supportsParameter(MethodParameter parameter) {
        // only JwtTokenInfo parameters annotated with @AuthUser are supported
        return parameter.getParameterType().equals(JwtTokenInfo.class) &&
                parameter.hasParameterAnnotation(AuthUser.class);
    }

    @Override
    public Object resolveArgument(MethodParameter parameter, ModelAndViewContainer mavContainer,
                                  NativeWebRequest webRequest, WebDataBinderFactory binderFactory) {
        // read the claims that the JWT filter stored in the security context
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        Claims claims = (Claims) authentication.getPrincipal();
        Long userId = Long.parseLong((String) claims.get(JwtProperties.USER_ID));
        UserRole userRole = UserRole.valueOf((String) claims.get(JwtProperties.USER_ROLE));
        return JwtTokenInfo.builder()
                .userId(userId)
                .userRole(userRole)
                .build();
    }
}
- With this resolver, whenever a parameter is annotated with @AuthUser, the user id and role are extracted from the incoming JWT.
@Configuration
@RequiredArgsConstructor
public class WebConfig implements WebMvcConfigurer {

    private final AuthUserArgumentResolver authUserArgumentResolver;

    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> resolvers) {
        resolvers.add(authUserArgumentResolver);
    }
}
- The resolver is registered in the WebMvcConfigurer above so that it can be used in controllers (a usage sketch follows).
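The post does not show a controller that uses the annotation; as a minimal usage sketch (the DiaryController name and endpoint are hypothetical, and JwtTokenInfo is assumed to expose a getUserId() accessor), a handler method could receive the token info like this:

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/diaries")
public class DiaryController {

    // AuthUserArgumentResolver builds JwtTokenInfo from the JWT claims and injects it here
    @GetMapping("/me")
    public ResponseEntity<Long> myUserId(@AuthUser JwtTokenInfo tokenInfo) {
        return ResponseEntity.ok(tokenInfo.getUserId()); // id taken straight from the token
    }
}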
Installing Kafka (KRaft mode)
- Up to the last sprint, Kafka was only run locally for testing.
- In this sprint I wanted to install Kafka properly and build the picture-diary generation logic while exchanging messages with the actual AI server.
- Kafka was traditionally used together with ZooKeeper, which managed Kafka's metadata.
- Work to remove the ZooKeeper dependency began around version 2.7, and from the 3.5.x line Kafka runs in KRaft mode, which removes that dependency.
- From the 4.x line, the ZooKeeper dependency is said to be removed entirely.
Installation Steps
- In a real production environment you would use multiple instances, say 3 controllers and 3 brokers, but because of cost we run 3 brokers on a single instance, with each broker also acting as a controller.
- Each uses a different port.
- Broker 1: 9092, Broker 2: 9093, Broker 3: 9094
- Controller 1: 9095, Controller 2: 9096, Controller 3: 9097
Installing Java
sudo apt update
sudo apt upgrade
sudo apt install openjdk-17-jdk -y
Installing Kafka on EC2
wget https://downloads.apache.org/kafka/3.7.1/kafka_2.13-3.7.1.tgz
tar -xzf kafka_2.13-3.7.1.tgz
sudo mv kafka_2.13-3.7.1 /opt/kafka
mkdir -p /opt/kafka/logs/broker{1,2,3}
- Download Kafka.
- Extract the archive.
- Move the Kafka directory to /opt/kafka.
- Create the directories where each broker's data logs will be written.
Generating the Kafka Cluster ID
KAFKA_CLUSTER_ID="$(/opt/kafka/bin/kafka-storage.sh random-uuid)"
echo "KAFKA_CLUSTER_ID: $KAFKA_CLUSTER_ID"
Creating and Editing the Configuration File for Each Broker
- Path: /opt/kafka/config/kraft
- Create server1.properties, server2.properties, and server3.properties in that directory.
# server1.properties
############################# Server Basics #############################
# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller
# The node id associated with this instance's roles
node.id=1
# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9095,2@localhost:9096,3@localhost:9097
############################# Socket Server Settings #############################
# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9095
# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT
# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://{EC2 public IP}:9092
# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/logs/broker1
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# the number of partitions per topic is set to 3
num.partitions=3
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# server2.properties
############################# Server Basics #############################
# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller
# The node id associated with this instance's roles
node.id=2
# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9095,2@localhost:9096,3@localhost:9097
############################# Socket Server Settings #############################
# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093,CONTROLLER://:9096
# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT
# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://{EC2 public IP}:9093
# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/logs/broker2
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
- Write server3.properties in the same way, following the same pattern (broker port 9094, controller port 9097).
- Clients connecting to the cluster connect only through port 9092.
Initializing Each Broker's Data Directory
/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server1.properties
/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server2.properties
/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server3.properties
Running in the Background
- Since Kafka is a Java-based tool, it is started in the background with nohup (a quick connectivity check from the Java side follows the commands below).
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server1.properties > /opt/kafka/logs/broker1.log 2>&1 &
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server2.properties > /opt/kafka/logs/broker2.log 2>&1 &
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server3.properties > /opt/kafka/logs/broker3.log 2>&1 &
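To sanity-check that the brokers are reachable from the application through the advertised 9092 listener, a minimal producer using the plain kafka-clients library might look like the following; the "diary-request" topic name and the placeholder IP are examples, not the project's actual values:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        // connect through broker 1's advertised PLAINTEXT listener on port 9092
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "{EC2 public IP}:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // example topic name; the real topic used with the AI server may differ
            producer.send(new ProducerRecord<>("diary-request", "test-key", "hello kafka"));
            producer.flush();
        }
    }
}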
SonarQube
- Toward the end of this sprint, our mentor recommended checking code quality with SonarQube.
- The screen after deploying SonarQube looks like this.
[Screenshot: SonarQube dashboard after deployment]
- Since it shows exactly where the problems are, I plan to refactor in the next sprint.
- Also, because no test code has been written yet, the build runs with build -x test to skip tests, so no coverage is reported; I plan to fix that part as well!