perl - How to speed up MongoDB Inserts/sec?

I'm trying to maximize inserts per second. I currently get around 20k inserts/sec, and performance actually degrades as I add more threads and CPUs (I have 16 cores available): 2 threads currently do more per second than 16 threads on a 16-core dual-processor machine. Any ideas on what the problem is? Is it because I'm using only one mongod? Is it indexing that could be slowing things down? Do I need to use sharding? I wonder if there's a way to shard, but also keep the database capped...

Constraints: must handle around 300k inserts/sec, must be self-limiting (capped), and must be queryable relatively quickly

Problem space: must handle call records for a major cellphone company (around 300k inserts/sec) and keep those call records queryable for as long as possible (a week, for instance)

#!/usr/bin/perl

use strict;
use warnings;
use threads;
use threads::shared;

use MongoDB;
use Tie::IxHash;    # ordered hashes for run_command arguments
use boolean;        # true/false values for driver options
use Time::HiRes;

my $conn = MongoDB::Connection->new;

my $db = $conn->tutorial;

my $users = $db->users;

# Create "users" as a capped collection. A capped collection also needs a
# maximum size in bytes; the size used here is illustrative.
my $cmd = Tie::IxHash->new(
    "create"    => "users",
    "capped"    => boolean::true,
    "size"      => 20000000000,
    "max"       => 10000000,
    );

$db->run_command($cmd);

# Build the index in the background. ensure_index expects the index keys
# first and the options second; the indexed field ("name") is illustrative.
my $idx = Tie::IxHash->new(
    "name" => 1,
);
$users->ensure_index($idx, { "background" => boolean::true });


my $myhash =
    {
        "name"  => "James",
        "age"   => 31,
        #    "likes" => [qw/Danielle biking food games/]
    };

my $j : shared = 0;   # shared counter of queued documents

my $numthread = 2;  # how many threads to run

# Build a batch of 100,000 documents (each element references the same hash).
my @array;
for (1..100000) {
    push (@array, $myhash);
    $j++;
}

sub thInsert {
    # Each thread inserts the whole batch with one batch_insert call,
    # which expects an array reference of documents.
    #my @ids = $users->batch_insert(@array);
    #$users->bulk_insert(@array);
    $users->batch_insert(\@array);
}

my @threads;

my $timestart = Time::HiRes::time();
push @threads, threads->new(\&thInsert) for 1..$numthread;
$_->join foreach @threads; # wait for all threads to finish
print (($j*$numthread) . "\n");    # total documents inserted
my $timeend = Time::HiRes::time();

print( (($j*$numthread)/($timeend - $timestart)) . "\n");    # inserts per second

$users->drop();
$db->drop();

1 Reply


Writes to MongoDB currently acquire a global write lock, although collection-level locking is hopefully coming soon. By using more threads you're likely introducing more contention, as the threads block each other while they wait for the lock to be released.

Indexes will also slow you down. To get the best insert performance, it's ideal to add them after you've loaded your data; however, this isn't always possible, for example if you're using a unique index.
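
As a rough sketch of that approach, using the same legacy Perl driver calls as the script above (the collection and the indexed field name are just placeholders, not something prescribed by the question):

#!/usr/bin/perl
use strict;
use warnings;

use MongoDB;
use boolean;

my $conn  = MongoDB::Connection->new;
my $users = $conn->tutorial->users;

# Bulk-load first, with no secondary indexes defined yet.
my @batch = map { { name => "James", age => 31 } } 1 .. 100_000;
$users->batch_insert(\@batch);

# Only once the load is done, build the index; doing it in the background
# keeps the collection available while the index is created.
$users->ensure_index( { name => 1 }, { background => boolean::true } );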

To really maximise write performance, your best bet is sharding. This will give you much better concurrency and higher disk I/O capacity, as writes are distributed across several machines.
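
For illustration only, a sketch of turning sharding on from the Perl driver by sending the admin commands through a mongos router; the host name, the database/collection names, and the shard key field are assumptions rather than anything from the question:

#!/usr/bin/perl
use strict;
use warnings;

use MongoDB;
use Tie::IxHash;

# Connect to a mongos router rather than a single mongod (hypothetical host).
my $conn  = MongoDB::Connection->new( host => 'mongodb://mongos-host:27017' );
my $admin = $conn->get_database('admin');

# Enable sharding on the database, then shard the collection on a key that
# spreads the inserts across shards (a call-record field is assumed here).
$admin->run_command( Tie::IxHash->new( 'enableSharding' => 'tutorial' ) );
$admin->run_command(
    Tie::IxHash->new(
        'shardCollection' => 'tutorial.users',
        'key'             => { caller_id => 1 },
    )
);

Inserts then go through mongos, which routes each document to the shard that owns its key range, so the write load is spread over multiple mongod processes and disks.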

