🔥Project🔥

Hanium Project Contest, Sprint 2 (through 7/29) 🔥

jmboy 2024. 8. 5. 14:26

What I worked on during this sprint

  1. Kakao and Google login (Google still needs follow-up work)
  2. Logout and account deletion
  3. Diary detail page API
  4. Favorites, artist, and emotion APIs
  5. Kafka deployment
  6. Code quality checks with SonarQube

Kakao and Google Login with Spring Security

  • To apply Spring Security, I read the book Spring Security in Action while developing.
  • However, the book covers Spring Security 5.x.x, which doesn't quite fit the Spring Boot 3.x.x version I'm developing on.
  • The basic flow of Spring Security is as follows.

spring security in action

  • ์ธ์ฆ ํ•„ํ„ฐ๋ฅผ ๊ฑฐ์น˜๊ณ  ์ธ์ฆ ๊ด€๋ฆฌ์ž๋ฅผ ๊ฑฐ์นœํ›„ , authenticatino provider์—๊ฒŒ ์‚ฌ์šฉ์ž ์„ธ๋ถ€ ์ •๋ณด์™€ ์•”ํ˜ธ ์ธ์ฝ”๋”๋ฅผ ๋ฐ›์•„์„œ ์ธ์ฆ์„ ํ•œ ์ดํ›„ ํ•ด๋‹น ์œ ์ €์— ๋Œ€ํ•œ ์ •๋ณด๋ฅผ ๋ณด์•ˆ ์ปจํ…์ŠคํŠธ์— ์ €์žฅํ•œ๋‹ค.
  • ํ•˜์ง€๋งŒ ์ด๋ฒˆ ํ”„๋กœ์ ํŠธ๋Š” ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ฐœ๋ฐœ์ด๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์šฉ์ž์˜ ๋ณด๋‹ค ์‰ฌ์šด ๋กœ๊ทธ์ธ์„ ๋•๊ธฐ ์œ„ํ•ด OAuth ์™€ JWT๋ฅผ ์‚ฌ์šฉํ•ด ๋กœ๊ทธ์ธ์„ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜์˜€๋‹ค.

์ธ์ฆ ๋กœ์ง

  • ์ด๋ฒˆ ๊ฐœ๋ฐœ์—์„œ ์‚ฌ์šฉํ•œ ๋กœ๊ทธ์ธ์˜ ํ๋ฆ„๋„๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.
  • ๋กœ๊ทธ์ธ์„ ์š”์ฒญํ•˜๋ฉด ํด๋ผ์ด์–ธํŠธ๋Š” sns ๊ณ„์ •์„ ํ†ตํ•ด ๋กœ๊ทธ์ธ์„ ํ•œ๋‹ค.
  • ์ดํ›„ ๋กœ๊ทธ์ธ์„ ํ•˜๋ฉด ์ œ๊ณต๋˜๋Š” ์Šน์ธ์ฝ”๋“œ๋ฅผ ํ†ตํ•ด ์†Œ์…œ ๊ณ„์ •์˜ ํ† ํฐ์„ ๋ฐœ๊ธ‰๋ฐ›๋Š”๋‹ค.
  • ๋ฐœ๊ธ‰ ๋ฐ›์€ ํ† ํฐ์„ ํ† ๋Œ€๋กœ ํ•ด๋‹น ์œ ์ €์˜ ์ •๋ณด๋ฅผ ๊ฐ€์ ธ์™€ ์šฐ๋ฆฌ์˜ ์„œ๋น„์Šค์˜ ํšŒ์›์— ๊ฐ€์ž…์‹œํ‚จ๋‹ค.
  • ์ดํ›„ ๊ฐ€์ž…๋œ ์œ ์ €์—๊ฒŒ Jwt๋ฅผ ๋ฐœ๊ธ‰ํ•œ๋‹ค.
  • ๋ฐœ๊ธ‰ํ•œ JWT์˜ ์œ ์ €์ •๋ณด๋ฅผ spring security Context ์— ๋„ฃ์–ด ์ ‘๊ทผ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋„๋ก ํ•œ๋‹ค.
  • ์ดํ›„ spring security ์˜ filterchain์„ ํ†ตํ•ด ๋“ค์–ด์˜ค๋Š” ํ†ต์‹ ์— ๋Œ€ํ•ด jwt๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๋‹ค๋ฉด ์•ฑ์˜ ์ ‘์†์„ ํ—ˆ๊ฐ€ํ•œ๋‹ค.
    • ์—ฌ๊ธฐ์„œ filterchain์„ JwtAuthenticationFilter๋ฅผ usernamePasswordAuthenticationFilter ์•ž์— ๋ฐฐ์น˜ํ•ด ํ•„ํ„ฐ์ฒด์ธ์ด ์ ์šฉ๋˜๋„๋ก ํ•˜์˜€๋‹ค.
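As a rough sketch (not the project's actual code), that filter placement might be wired like this in the Spring Security 6 style used by Spring Boot 3; SecurityConfig and the "/auth/**" matcher are assumptions, while JwtAuthenticationFilter is the filter named above:

```java
// Sketch only: wiring a custom JWT filter ahead of UsernamePasswordAuthenticationFilter.
// SecurityConfig and the "/auth/**" matcher are assumptions for illustration.
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http,
                                           JwtAuthenticationFilter jwtAuthenticationFilter) throws Exception {
        http
            .csrf(AbstractHttpConfigurer::disable)   // stateless JWT API; no CSRF tokens needed
            .sessionManagement(s -> s.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/auth/**").permitAll()   // assumed login endpoints
                .anyRequest().authenticated())
            // place the JWT filter before the username/password filter
            .addFilterBefore(jwtAuthenticationFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }
}
```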

์Šคํ”„๋ง ์‹œํ๋ฆฌํ‹ฐ๋ฅผ ์ ์šฉํ•˜๊ณ  ๋‚˜๋‹ˆ ๋ฌธ์ œ์ 

  • ์Šคํ”„๋ง ์‹œํ๋ฆฌํ‹ฐ๋งŒ์„ ์ ์šฉํ•˜๊ณ  ๋‚˜๋‹ˆ, ์—๋Ÿฌ๊ฐ€ ๋‚˜๋ฉด 403 ์—๋Ÿฌ๋กœ ๋ฌด์กฐ๊ฑด ๋น ์ ธ๋ฒ„๋ ธ๋‹ค.
  • ์Šคํ”„๋ง ๊ณต์‹ ๋ธ”๋กœ๊ทธ์— ๋”ฐ๋ฅด๋ฉด, ์Šคํ”„๋ง๋ถ€ํŠธ์—์„œ๋Š” ์—๋Ÿฌ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด /error๋ผ๋Š” URI๋กœ ๋งคํ•‘์„ ์‹œ๋„ํ•œ๋‹ค. ์‹ค์ œ๋กœ ํ•ด๋‹น URI๋กœ ์ด๋™ํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์€ ํŽ˜์ด์ง€๊ฐ€ ๋‚˜ํƒ€๋‚œ๋‹ค.
  • Whitelabel Error Page ์ž์ฒด๋Š” 403 ์—๋Ÿฌ์™€ ๊ด€๋ จ์ด ์—†์ง€๋งŒ ์—๋Ÿฌ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด /error๋กœ ๋งคํ•‘์„ ์‹œ๋„ํ•œ๋‹ค๋Š” ๊ฒƒ์ด ํ•ต์‹ฌ์ด๋‹ค.
  • ํ•˜์ง€๋งŒ ์šฐ๋ฆฌ๋Š” /error ์—”๋“œํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ ํ—ˆ๊ฐ€ํ•ด์ฃผ์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์— ์—๋ŸฌํŽ˜์ด์ง€๋กœ ์ด๋™ํ•  ๋•Œ ํ† ํฐ์ด ์—†์–ด 403 ์—๋Ÿฌ๊ฐ€ ๋‚˜๋ฒ„๋ ธ๋˜ ๊ฒƒ์ด๋‹ค.
  • ์ดํ›„ ํ† ํฐ์— ๋Œ€ํ•œ ์—๋Ÿฌ์ฝ”๋“œ๋“ค์„ ์ž‘์„ฑํ•œ ์ดํ›„ jwtAuthenticationFilter ์•ž์— entryPoint์— ๋Œ€ํ•œ ํ•„ํ„ฐ์ฒด์ธ์— ๊ฑธ์–ด์ฃผ์—ˆ๋”๋‹ˆ ํ† ํฐ์— ๋Œ€ํ•œ ์—๋Ÿฌ๋“ค๋„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.

Resolver

  • JWT ํ† ํฐ์„ ๋ณด๋‚ผ๋•Œ ๋งˆ๋‹ค ํ•ด๋‹น ํ† ํฐ์—์„œ ์œ ์ € id์™€ role์„ ๊บผ๋‚ด๊ณ  ์‹ถ์—ˆ๋‹ค.
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)// ๋Ÿฐํƒ€์ž„๋™์•ˆ ์œ ์ง€
@Parameter(hidden = true)// swagger์—์„œ ๋ณด์ด์ง€ ์•Š๊ฒŒ ์„ค์ •
public @interface AuthUser {
}
  • ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์–ด๋…ธํ…Œ์ด์…˜ ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๋งŒ๋“ค๊ณ ,
@Component
public class AuthUserArgumentResolver implements HandlerMethodArgumentResolver {

    public AuthUserArgumentResolver() {
    }

    @Override
    public boolean supportsParameter(MethodParameter parameter) {
        return parameter.getParameterType().equals(JwtTokenInfo.class) &&
                parameter.hasParameterAnnotation(AuthUser.class); // resolve only @AuthUser JwtTokenInfo parameters
    }

    @Override
    public Object resolveArgument(MethodParameter parameter, ModelAndViewContainer mavContainer,
        NativeWebRequest webRequest, WebDataBinderFactory binderFactory) {

        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        Claims claims = (Claims) authentication.getPrincipal();
        Long userId = Long.parseLong((String) claims.get(JwtProperties.USER_ID));
        UserRole userRole = UserRole.valueOf((String) claims.get(JwtProperties.USER_ROLE));

        return JwtTokenInfo.builder()
                .userId(userId)
                .userRole(userRole)
                .build();
    }
}
  • Using this resolver, the token's id and role are extracted from the incoming JWT for parameters annotated with @AuthUser.
@Configuration
@RequiredArgsConstructor
public class WebConfig implements WebMvcConfigurer {

    private final AuthUserArgumentResolver authUserArgumentResolver;

    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> resolvers) {
        resolvers.add(authUserArgumentResolver);
    }
}
  • The resolver is registered through WebMvcConfigurer's addArgumentResolvers so Spring MVC can use it.
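With the resolver registered, a controller can receive the token info directly. The endpoint below is purely illustrative (DiaryController and its path are not from the actual project):

```java
// Illustrative only: @AuthUser makes the resolver inject the JwtTokenInfo
// built from the claims stored in the SecurityContext.
@RestController
public class DiaryController {

    @GetMapping("/diaries/me")
    public ResponseEntity<Long> myDiaries(@AuthUser JwtTokenInfo tokenInfo) {
        // tokenInfo's userId / userRole come straight from the validated JWT
        return ResponseEntity.ok(tokenInfo.getUserId());
    }
}
```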

Installing Kafka (KRaft mode)

  • Up to the last sprint, Kafka was only run locally for testing.
  • In this sprint, I wanted to install Kafka on a server and complete the picture-diary generation logic by exchanging messages with the actual AI server.
  • Kafka was traditionally run together with ZooKeeper, which managed Kafka's metadata.
  • The effort to remove the ZooKeeper dependency (KIP-500) produced KRaft mode, which shipped as early access in 2.8 and became production-ready in 3.3; ZooKeeper has been deprecated since 3.5.
  • From 4.x, the ZooKeeper dependency is removed entirely.

Installation

  • In a real production environment you would run multiple instances, e.g. 3 controllers and 3 brokers, but for cost reasons we run 3 combined nodes on a single instance, each acting as both broker and controller.
  • Each node uses its own ports:
  • Broker 1: 9092, Broker 2: 9093, Broker 3: 9094
  • Controller 1: 9095, Controller 2: 9096, Controller 3: 9097

Installing Java

sudo apt update
sudo apt upgrade
sudo apt install openjdk-17-jdk -y

Installing Kafka on EC2

wget https://downloads.apache.org/kafka/3.7.1/kafka_2.13-3.7.1.tgz
tar -xzf kafka_2.13-3.7.1.tgz
sudo mv kafka_2.13-3.7.1 /opt/kafka
sudo mkdir -p /opt/kafka/logs/broker{1,2,3}
  • Download Kafka.
  • Extract the archive.
  • Move the Kafka files into /opt/kafka.
  • Create the log directories for each broker (sudo, since /opt/kafka is root-owned after the move).

์นดํ”„์นด ํด๋Ÿฌ์Šคํ„ฐ ID ์ƒ์„ฑ

KAFKA_CLUSTER_ID="$(/opt/kafka/bin/kafka-storage.sh random-uuid)"
echo "KAFKA_CLUSTER_ID: $KAFKA_CLUSTER_ID"

๊ฐ ๋ธŒ๋กœ์ปค์— ๋Œ€ํ•œ ์„ค์ • ํŒŒ์ผ ์ƒ์„ฑ ๋ฐ ์ˆ˜์ •

  • ๊ฒฝ๋กœ : /opt/kafka/config/kraft
  • ํ•ด๋‹น ๊ฒฝ๋กœ์— server1.properties, server2.properties, server3.properties ๋ฅผ ์ƒ์„ฑํ•œ๋‹ค
# server1.properties
############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9095,2@localhost:9096,3@localhost:9097

############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9095

# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://{EC2 public IP}:9092

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/logs/broker1

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# We set the default number of partitions to 3.
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
# server2.properties
############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=2

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9095,2@localhost:9096,3@localhost:9097

############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093,CONTROLLER://:9096

# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT

# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://{EC2 public IP}:9093

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/kafka/logs/broker2

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
  • ๋‹ค์Œ๊ณผ ๊ฐ™์ด server3.properties์— ๋Œ€ํ•ด์„œ๋„ ์ž‘์„ฑํ•ด์ค€๋‹ค.
  • ์ ‘์†ํ•˜๋Š” ์ปจํŠธ๋กค๋Ÿฌ๋Š” 9092 ํฌํŠธ๋กœ๋งŒ ์ ‘์†ํ•˜๋„๋ก ํ•˜์˜€๋‹ค.
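Only a handful of keys actually differ between the three files. Assuming the same pattern continues, the lines that change in server3.properties would be:

```properties
# server3.properties — only the per-broker keys differ from server1.properties
node.id=3
listeners=PLAINTEXT://:9094,CONTROLLER://:9097
advertised.listeners=PLAINTEXT://{EC2 public IP}:9094
log.dirs=/opt/kafka/logs/broker3
```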

๊ฐ ๋ธŒ๋กœ์ปค์˜ ๋ฐ์ดํ„ฐ ๋””๋ ‰ํ† ๋ฆฌ ์ดˆ๊ธฐํ™”

/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server1.properties
/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server2.properties
/opt/kafka/bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c /opt/kafka/config/kraft/server3.properties

๋ฐฑ๊ทธ๋ผ์šด๋“œ๋กœ ์‹คํ–‰

  • java ๊ธฐ๋ฐ˜์˜ ๋„๊ตฌ์ด๊ธฐ ๋•Œ๋ฌธ์— nohup ์„ ํ†ตํ•ด ๋ฐฑ๊ทธ๋ผ์šด๋“œ๋กœ ์‹คํ–‰ํ•œ๋‹ค.
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server1.properties > /opt/kafka/logs/broker1.log 2>&1 &
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server2.properties > /opt/kafka/logs/broker2.log 2>&1 &
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server3.properties > /opt/kafka/logs/broker3.log 2>&1 &
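To check that the three nodes actually formed a quorum, the standard scripts that ship with Kafka can be pointed at the bootstrap port (these are stock CLI commands in Kafka 3.x; the topic name here is just a throwaway example):

```shell
# Describe the KRaft quorum (leader, voters, lag); available since Kafka 3.3
/opt/kafka/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status

# Create a test topic and confirm it is replicated across all three brokers
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic healthcheck --partitions 3 --replication-factor 3
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic healthcheck
```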

SonarQube

  • Toward the end of this sprint, our mentor recommended checking code quality with SonarQube.
  • After deploying SonarQube, the dashboard looks like this.

 

  • ์–ด๋– ํ•œ ๋ถ€๋ถ„์—์„œ ๋ฌธ์ œ๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์–ด, ๋‹ค์Œ ์Šคํ”„๋ฆฐํŠธ์— ๋ฆฌํŒฉํ† ๋ง์„ ํ•  ์˜ˆ์ •์ด๋‹ค.
  • ๋˜ํ•œ ์ž‘์„ฑ๋˜์–ด์žˆ๋Š” test์ฝ”๋“œ๊ฐ€ ์žˆ์œผ๋‚˜ build -x test๋กœ ํ…Œ์ŠคํŠธ์ฝ”๋“œ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๋นŒ๋“œ๋ฅผ ํ•ด ์ปค๋ฒ„๋ฆฌ์ง€๊ฐ€ ๋‚˜์˜ค์ง€ ์•Š์•„ ์ด ๋ถ€๋ถ„๋„ ์ˆ˜์ •ํ•  ์˜ˆ์ •์ด๋‹ค!

 

๋„์›€๋ฐ›์€ ๋ธ”๋กœ๊ทธ๋“ค