Overview

This benchmark establishes baseline configurations for the optimization of Athento ECM (based on the Nuxeo Platform).

Configurations

  1. Single server

    1. Hardware

      1. Ubuntu 14.04.3 LTS (trusty)
      2. 1 x Intel(R) Xeon(R) CPU E5-1630 v3 @ 3.70GHz
      3. 64 GB memory
      4. 200 GB SATA disk; cached reads: 11641.27 MB/sec, buffered disk reads: 56.08 MB/sec
    2. Software

      1. Nuxeo CAP 6.0

        1. JVM options (see the nuxeo.conf sketch below): JAVA_OPTS=-Xms1024m -Xmx2048m -Dfile.encoding=UTF-8 -Dmail.mime.decodeparameters=true -Djava.util.Arrays.useLegacyMergeSort=true -Xloggc:${nuxeo.log.dir}/gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

        2. JMX enabled for monitoring (benchmark only, used to debug health metrics)

        3. ACL optimization = false (benchmark only)

        4. Full-text indexing = false (benchmark only)
        5. NX-Quota = true
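
        For reference, a minimal sketch of how these settings are usually placed in nuxeo.conf. The JAVA_OPTS value is the one listed above; the remote-JMX flags and port are assumptions added to illustrate the JMX item (standard JVM flags, not taken from the benchmark setup):

          # nuxeo.conf (sketch)
          JAVA_OPTS=-Xms1024m -Xmx2048m -Dfile.encoding=UTF-8 -Dmail.mime.decodeparameters=true -Djava.util.Arrays.useLegacyMergeSort=true -Xloggc:${nuxeo.log.dir}/gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
          # Remote JMX for jvisualvm (assumed port, no auth/SSL -- benchmark only)
          JAVA_OPTS=$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1089 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
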
      2. PostgreSQL

        1. PostgreSQL 9.3.9

        2. postgresql-9.2-1002.jdbc4.jar 

        3. pg_ctl options (set in postgresql.conf)
          1. default
          2. configuration from the Nuxeo benchmark (see the postgresql.conf sketch below): --effective_cache_size=16GB --shared_buffers=10GB --max_prepared_transactions=128 --work_mem=64MB --maintenance_work_mem=1GB --wal_buffers=24MB --checkpoint_completion_target=0.8 --checkpoint_segments=32 --checkpoint_timeout=15min --default_text_search_config=pg_catalog.french --fsync=off --full_page_writes=off --log_min_duration_statement=80ms --log_rotation_size=100MB --synchronous_commit=off --track_activities=on --track_counts=on --log_line_prefix='%t [%p]: [%l-1] ' --port=5436 --max_connections=64 --random_page_cost=2
        4. Analysis: https://wiki.postgresql.org/wiki/Performance_Analysis_Tools
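
        For reference, a sketch of the same Nuxeo-benchmark option set written directly into postgresql.conf (identical values to the command-line form above; only the syntax differs):

          # postgresql.conf (sketch, PostgreSQL 9.3)
          effective_cache_size = 16GB
          shared_buffers = 10GB
          max_prepared_transactions = 128
          work_mem = 64MB
          maintenance_work_mem = 1GB
          wal_buffers = 24MB
          checkpoint_completion_target = 0.8
          checkpoint_segments = 32
          checkpoint_timeout = 15min
          default_text_search_config = 'pg_catalog.french'
          fsync = off                            # benchmark only, never in production
          full_page_writes = off                 # benchmark only, never in production
          log_min_duration_statement = 80ms
          log_rotation_size = 100MB
          synchronous_commit = off
          track_activities = on
          track_counts = on
          log_line_prefix = '%t [%p]: [%l-1] '
          port = 5436
          max_connections = 64
          random_page_cost = 2
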
    3. Tools

      1. JMeter 2.13 to run the stress tests: GUI & REST (source: nuxeo-bench-jmeter.jmx); see the command-line sketch after this list

        1. Memory: -Xms1g -Xmx2g

      2. jvisualvm to monitor the JVM

      3. powa (http://dalibo.github.io/powa/) to monitor PostgreSQL
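
      The JMeter plan can also be driven headless for the measurement runs; a minimal sketch, assuming standard JMeter 2.x command-line options and an arbitrary results file name:

        # non-GUI run of the benchmark plan (sketch)
        JVM_ARGS="-Xms1g -Xmx2g" ./bin/jmeter -n -t nuxeo-bench-jmeter.jmx -l results-30-60-10.jtl
        #   -n  non-GUI mode
        #   -t  test plan (the nuxeo-bench-jmeter.jmx source mentioned above)
        #   -l  file where sample results are written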

Results

Metrics

  • Transactions per second
  • Transaction throughput vs threads
  • Hits per second (Req/s)
  • Response time vs threads

Network

  • The database runs on the same machine, so network latency is negligible.

Setup

Number of documents in the repository: 800K

Number of users (threads): 30, 40, and 50

Ramp-up time (seconds): 60, 90, and 120

Loop count (iterations): 10, 30, 50

A scenario configuration is written as Users/Ramp-up/Iterations (e.g. 30/60/10).

Scenario 1: Basic Navigation (not randomized)

  • Login
  • Workspace and folder navigation down to the 3rd level
  • Open a document inside a folder
  • Tab navigation within the document (Summary, Relations, Comments, History)
  • Logout

Configuration a)

  • PostgreSQL with pg_ctl options = default
  • ACL optimization = false
  • full-text indexing = false
  • Quota-active = true
30/60/10

Transactions per second

Hits per second

Response time vs Threads

40/90/30

Transactions per second

Hits per second

Response time vs Threads

50/120/50

 

Transactions per second

Hits per second

Response time vs Threads


Postgres Read/Write
Queries per second / Blocks per second


Configuration b)

  • PostgreSQL with pg_ctl options = '--effective_cache_size=16GB --shared_buffers=10GB --max_prepared_transactions=128 --work_mem=64MB --maintenance_work_mem=1GB --wal_buffers=24MB --checkpoint_completion_target=0.8 --checkpoint_segments=32 --checkpoint_timeout=15min --default_text_search_config=pg_catalog.spanish --fsync=off --full_page_writes=off --log_min_duration_statement=80ms --log_rotation_size=100MB --synchronous_commit=off --track_activities=on --track_counts=on --log_line_prefix='%t [%p]: [%l-1] ' --port=5433 --max_connections=64 --random_page_cost=2'
  • ACL optimization = false
  • full-text indexing = false
  • Quota-active = false
30/60/10

Transactions per second

Hits per second

Response time vs Threads

40/90/30

Transactions per second

Hits per second

Response time vs Threads

50/120/50

 

Transactions per second

Hits per second

Response time vs Threads

 

Postgres Read/Write
Queries per second / Blocks per second

 

 

Scenario 2: Document creation (REST API with automation)

  • Check whether the benchmark workspace exists (it is created only the first time)
  • Create a folder in the benchmark workspace
  • Create a document in the created folder (10 iterations)
  • Attach a blob to the created document (10 iterations)
  • Fetch the created document
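
A rough illustration of the kind of requests behind this scenario, using the Nuxeo REST API with curl. Host, credentials, paths and document types are assumptions; the exact calls live in nuxeo-bench-jmeter.jmx, and the blob-attachment step is omitted here:

  # Create a folder in the benchmark workspace (sketch)
  curl -u Administrator:Administrator -X POST \
    -H "Content-Type: application/json" \
    -d '{"entity-type":"document","type":"Folder","name":"bench-folder-1","properties":{"dc:title":"bench-folder-1"}}' \
    http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/bench

  # Create a document inside the new folder (repeated 10 times in the plan)
  curl -u Administrator:Administrator -X POST \
    -H "Content-Type: application/json" \
    -d '{"entity-type":"document","type":"File","name":"bench-doc-1","properties":{"dc:title":"bench-doc-1"}}' \
    http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/bench/bench-folder-1

  # Fetch the created document back
  curl -u Administrator:Administrator \
    http://localhost:8080/nuxeo/api/v1/path/default-domain/workspaces/bench/bench-folder-1/bench-doc-1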

Configuration a)

  • PostgreSQL with pg_ctl options = '--effective_cache_size=16GB --shared_buffers=10GB --max_prepared_transactions=128 --work_mem=64MB --maintenance_work_mem=1GB --wal_buffers=24MB --checkpoint_completion_target=0.8 --checkpoint_segments=32 --checkpoint_timeout=15min --default_text_search_config=pg_catalog.spanish --fsync=off --full_page_writes=off --log_min_duration_statement=80ms --log_rotation_size=100MB --synchronous_commit=off --track_activities=on --track_counts=on --log_line_prefix='%t [%p]: [%l-1] ' --port=5436 --max_connections=64 --random_page_cost=2'
  • ACL optimization = false
  • full-text indexing = false
  • Quota-active = false

NOTE: Memory problems detected (GC overhead limit exceeded), so the heap was increased to -Xms2g and -Xmx4g.

NOTE: Several worker threads were detected running for the Quota process, which may be the cause of the GC overhead limit errors.

NOTE: A "Too many open files" exception was detected. Solved as described in: https://doc.nuxeo.com/display/KB/java.net.SocketException+Too+many+open+files

NOTE: In the 50/120/50 test, the VCS connection pool was fully used. Solved by increasing the db and vcs pool sizes to 100 and raising the HTTP connector to 450 threads in server.xml (see the sketch below).
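
A sketch of how these fixes might look, assuming the standard nuxeo.conf pool parameters and the Tomcat maxThreads attribute (values taken from the notes above):

  # nuxeo.conf (sketch)
  JAVA_OPTS=-Xms2g -Xmx4g ...        # heap increased after the GC overhead errors (other options unchanged)
  nuxeo.db.max-pool-size=100         # JDBC datasource pool
  nuxeo.vcs.max-pool-size=100        # VCS repository pool

  # server.xml, HTTP connector (sketch)
  <Connector port="8080" protocol="HTTP/1.1" maxThreads="450" ... />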

 

30/60/10

Transactions per second

Transaction throughput vs Threads

Hits per second

Response time vs Threads

40/90/30

Transactions per second

Transaction throughput vs Threads

Hits per second

Response time vs Threads

50/120/50

 

Transactions per second

Transaction throughput vs Threads

Hits per second

Response time vs Threads

 

Postgres Read/Write
Queries per second / Blocks per second

 

Database info
Inserts (sorted by number of calls) / First hard selects (sorted by avg. runtime)

 

Result overview

  • Max transactions = 80/s
  • Max throughput = 480 transactions/s
    • Documents injected per second (Create document + Attach blob)
  • Max hits (GUI) = 1000 req/s
  • Max hits (API) = 220 req/s
  • Max concurrent users to achieve high performance: ~100

Conclusions

 

 

 

 

 

 
