I have implemented std::shared_mutex so that we can easily upgrade to the STL implementation once C++17 support arrives, rather than rolling our own API. I put all classes that implement or supplement STL functionality in namespace xtd, for "extended STD"; the idea is that when/if proper support arrives, we can swap xtd for std and run on the STL implementation.

I have also implemented xtd::recursive_shared_mutex. This class has no equivalent in standard C++, but it has the same API as std::shared_mutex with some extensions. (Readers are always recursive either way.)

In the code below I use a custom class called xtd::fast_recursive_mutex, which is a fully compatible drop-in replacement for std::recursive_mutex, but it locks faster on Windows (at least with our compiler) because it uses CRITICAL_SECTION directly, avoiding inefficiencies of the std::recursive_mutex class.

xtd/shared_mutex.hpp
#pragma once
#include "fast_recursive_mutex.hpp"
#include <climits>            // CHAR_BIT
#include <condition_variable>
#include <mutex>              // std::unique_lock, std::lock_guard
#include <thread>             // std::thread::id
namespace xtd {
namespace detail {
class shared_mutex_base {
public:
shared_mutex_base() = default;
shared_mutex_base(const shared_mutex_base&) = delete;
~shared_mutex_base() = default;
shared_mutex_base& operator = (const shared_mutex_base&) = delete;
protected:
using unique_lock = std::unique_lock < xtd::fast_recursive_mutex >;
using scoped_lock = std::lock_guard < xtd::fast_recursive_mutex >;
xtd::fast_recursive_mutex m_mutex;
std::condition_variable_any m_exclusive_release;
std::condition_variable_any m_shared_release;
unsigned m_state = 0;
void do_exclusive_lock(unique_lock& lk);
bool do_exclusive_trylock(unique_lock& lk);
void do_lock_shared(unique_lock& lk);
bool do_try_lock_shared(unique_lock& lk);
void do_unlock_shared(scoped_lock& lk);
void take_exclusive_lock();
bool someone_has_exclusive_lock() const;
bool no_one_has_any_lock() const;
unsigned number_of_readers() const;
bool maximal_number_of_readers_reached() const;
void clear_lock_status();
void increment_readers();
void decrement_readers();
static const unsigned m_write_entered = 1U << (sizeof(unsigned)*CHAR_BIT - 1);
static const unsigned m_num_readers = ~m_write_entered;
};
}
/// <summary> A shared_mutex implemented to the C++17 STL specification.
///
/// This is a Readers-Writer mutex with writer priority. Optional native_handle_type and
/// native_handle members are not implemented.
///
/// For detailed documentation, see: http://en.cppreference.com/w/cpp/thread/shared_mutex. </summary>
class shared_mutex : public detail::shared_mutex_base {
public:
shared_mutex() = default;
shared_mutex(const shared_mutex&) = delete;
~shared_mutex() = default;
shared_mutex& operator = (const shared_mutex&) = delete;
/// <summary> Obtains an exclusive lock of this mutex. </summary>
void lock();
/// <summary> Attempts to exclusively lock this mutex. </summary>
/// <returns> true if the lock was obtained, false otherwise. </returns>
bool try_lock();
/// <summary> Unlocks the exclusive lock on this mutex. </summary>
void unlock();
/// <summary> Obtains a shared lock on this mutex. Other threads may also hold a shared lock simultaneously. </summary>
void lock_shared();
/// <summary> Attempts to obtain a shared lock for this mutex. </summary>
/// <returns> true if the lock was obtained, false otherwise. </returns>
bool try_lock_shared();
/// <summary> Unlocks the shared lock on this mutex. </summary>
void unlock_shared();
};
/// <summary> This is a non-standard class which is essentially the same as `shared_mutex` but
/// it allows a thread to recursively obtain write locks as long as the unlock count matches
/// the lock-count. </summary>
class recursive_shared_mutex : public detail::shared_mutex_base {
public:
recursive_shared_mutex() = default;
recursive_shared_mutex(const recursive_shared_mutex&) = delete;
~recursive_shared_mutex() = default;
recursive_shared_mutex& operator = (const recursive_shared_mutex&) = delete;
/// <summary> Obtains an exclusive lock of this mutex. Recursive calls from the owning
/// thread will always obtain the lock. </summary>
void lock();
/// <summary> Attempts to exclusively lock this mutex. Recursive calls from the owning
/// thread will always obtain the lock. </summary>
/// <returns> true if the lock was obtained, false otherwise. </returns>
bool try_lock();
/// <summary> Unlocks the exclusive lock on this mutex. </summary>
void unlock();
/// <summary> Obtains a shared lock on this mutex. Other threads may also hold a shared lock simultaneously. </summary>
void lock_shared();
/// <summary> Attempts to obtain a shared lock for this mutex. </summary>
/// <returns> true if the lock was obtained, false otherwise. </returns>
bool try_lock_shared();
/// <summary> Unlocks the shared lock on this mutex. </summary>
void unlock_shared();
/// <summary> Number of recursive write locks. </summary>
/// <returns> The total number of write locks. </returns>
int num_write_locks();
/// <summary> Query if this object is exclusively locked by me. </summary>
/// <returns> true if locked by me, false if not. </returns>
bool is_locked_by_me();
private:
std::thread::id m_write_thread;
int m_write_recurses = 0;
};
}
shared_mutex.cpp
#include "pch/pch.hpp"
#include "xtd/shared_mutex.hpp"
#include <cassert>       // assert
#include <cerrno>        // EOVERFLOW, ENOLCK
#include <limits>        // std::numeric_limits
#include <system_error>  // std::system_error
#include <thread>
namespace xtd {
// ------------------------------------------------------------------------
// class: shared_mutex_base
// ------------------------------------------------------------------------
namespace detail {
void shared_mutex_base::do_exclusive_lock(unique_lock &lk){
while (someone_has_exclusive_lock()) {
m_exclusive_release.wait(lk);
}
take_exclusive_lock(); // We hold the mutex, there is no race here.
while (number_of_readers() > 0) {
m_shared_release.wait(lk);
}
}
bool shared_mutex_base::do_exclusive_trylock(unique_lock &lk){
if (lk.owns_lock() && no_one_has_any_lock()) {
take_exclusive_lock();
return true;
}
return false;
}
void shared_mutex_base::do_lock_shared(unique_lock& lk) {
while (someone_has_exclusive_lock() || maximal_number_of_readers_reached()) {
m_exclusive_release.wait(lk);
}
increment_readers();
}
bool shared_mutex_base::do_try_lock_shared(unique_lock& lk) {
if (lk.owns_lock() && !someone_has_exclusive_lock() &&
!maximal_number_of_readers_reached()) {
increment_readers();
return true;
}
return false;
}
void shared_mutex_base::do_unlock_shared(scoped_lock& lk) {
decrement_readers();
if (someone_has_exclusive_lock()) {
// A writer is waiting for us to unlock. If we were the last reader it was
// waiting for, release the one thread waiting for all shared locks to clear.
if (number_of_readers() == 0) {
m_shared_release.notify_one();
}
}
else {
// Nobody is waiting for shared locks to clear. If we were at maximal reader
// capacity, release one thread waiting to obtain a shared lock in lock_shared().
if (number_of_readers() == m_num_readers - 1)
m_exclusive_release.notify_one();
}
}
void shared_mutex_base::take_exclusive_lock() { m_state |= m_write_entered; }
bool shared_mutex_base::someone_has_exclusive_lock() const {
return (m_state & m_write_entered) != 0;
}
bool shared_mutex_base::no_one_has_any_lock() const { return m_state == 0; }
unsigned shared_mutex_base::number_of_readers() const {
return m_state & m_num_readers;
}
bool shared_mutex_base::maximal_number_of_readers_reached() const {
return number_of_readers() == m_num_readers;
}
void shared_mutex_base::clear_lock_status() { m_state = 0; }
void shared_mutex_base::increment_readers() {
unsigned num_readers = number_of_readers() + 1;
m_state &= ~m_num_readers;
m_state |= num_readers;
}
void shared_mutex_base::decrement_readers() {
unsigned num_readers = number_of_readers() - 1;
m_state &= ~m_num_readers;
m_state |= num_readers;
}
}
// ------------------------------------------------------------------------
// class: shared_mutex
// ------------------------------------------------------------------------
static_assert(std::is_standard_layout<shared_mutex>::value,
"Shared mutex must be standard layout");
void shared_mutex::lock() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex);
do_exclusive_lock(lk);
}
bool shared_mutex::try_lock() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex, std::try_to_lock);
return do_exclusive_trylock(lk);
}
void shared_mutex::unlock() {
{
std::lock_guard<xtd::fast_recursive_mutex> lg(m_mutex);
// We released an exclusive lock, no one else has a lock.
clear_lock_status();
}
m_exclusive_release.notify_all();
}
void shared_mutex::lock_shared() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex);
do_lock_shared(lk);
}
bool shared_mutex::try_lock_shared() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex, std::try_to_lock);
return do_try_lock_shared(lk);
}
void shared_mutex::unlock_shared() {
std::lock_guard<xtd::fast_recursive_mutex> _(m_mutex);
do_unlock_shared(_);
}
// ------------------------------------------------------------------------
// class: recursive_shared_mutex
// ------------------------------------------------------------------------
void recursive_shared_mutex::lock() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex);
if (m_write_recurses == 0) {
do_exclusive_lock(lk);
}
else {
if (m_write_thread == std::this_thread::get_id()) {
if (m_write_recurses ==
std::numeric_limits<decltype(m_write_recurses)>::max()) {
throw std::system_error(
EOVERFLOW, std::system_category(),
"Too many recursions in recursive_shared_mutex!");
}
}
else {
// Different thread trying to get a lock.
do_exclusive_lock(lk);
assert(m_write_recurses == 0);
}
}
m_write_recurses++;
m_write_thread = std::this_thread::get_id();
}
bool recursive_shared_mutex::try_lock() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex, std::try_to_lock);
if ((lk.owns_lock() && m_write_recurses > 0 && m_write_thread == std::this_thread::get_id()) ||
do_exclusive_trylock(lk)) {
m_write_recurses++;
m_write_thread = std::this_thread::get_id();
return true;
}
return false;
}
void recursive_shared_mutex::unlock() {
bool notify_them = false;
{
std::lock_guard<xtd::fast_recursive_mutex> lg(m_mutex);
if (m_write_recurses == 0) {
throw std::system_error(ENOLCK, std::system_category(),
"Unlocking an unlocked mutex!");
}
m_write_recurses--;
if (m_write_recurses == 0) {
// We released an exclusive lock, no one else has a lock.
clear_lock_status();
notify_them = true;
}
}
if (notify_them) {
m_exclusive_release.notify_all();
}
}
void recursive_shared_mutex::lock_shared() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex);
do_lock_shared(lk);
}
bool recursive_shared_mutex::try_lock_shared() {
std::unique_lock<xtd::fast_recursive_mutex> lk(m_mutex, std::try_to_lock);
return do_try_lock_shared(lk);
}
void recursive_shared_mutex::unlock_shared() {
std::lock_guard<xtd::fast_recursive_mutex> _(m_mutex);
do_unlock_shared(_);
}
int recursive_shared_mutex::num_write_locks() {
std::lock_guard<xtd::fast_recursive_mutex> _(m_mutex);
return m_write_recurses;
}
bool recursive_shared_mutex::is_locked_by_me() {
std::lock_guard<xtd::fast_recursive_mutex> _(m_mutex);
return m_write_recurses > 0 && m_write_thread == std::this_thread::get_id();
}
}
The implementation is based on the reference implementation found in this working paper.

#1
#pragma once

A convenient tool, as you already know; it is probably part of your automated script for creating new files. But for the sake of newcomers I will point out that the more standard include guards are compatible everywhere. How many people do you think really know the rules about them well?

Also, you use lk almost everywhere else; why change style at the end?

std::lock_guard<xtd::fast_recursive_mutex> _(m_mutex);

I would also group the special member functions: constructor/destructor together first, then the copy operations. You have:

shared_mutex() = default;
shared_mutex(const shared_mutex&) = delete;
~shared_mutex() = default;
shared_mutex& operator = (const shared_mutex&) = delete;

I would do this:

shared_mutex() = default;
~shared_mutex() = default;
// Disable copy semantics.
shared_mutex(const shared_mutex&) = delete;
shared_mutex& operator = (const shared_mutex&) = delete;

Don't like your state

You combine two states into a single variable, m_state. This makes the code harder to read. Optimize for readability, or add more comments around this code: it took me a few minutes to work out what you are trying to achieve here.

Basically, you combine the state into m_state using these constants, where the highest bit indicates an exclusive lock and all the other bits count the number of shared locks:

static const unsigned m_write_entered = 1U << (sizeof(unsigned)*CHAR_BIT - 1);
static const unsigned m_num_readers = ~m_write_entered;
// ^^^^^^^^^^^^^ don't like that name, it needs "max" in it.
Problem

You use m_exclusive_release both for the threads waiting for the exclusive lock in do_exclusive_lock() and as an overflow list for threads trying to obtain a shared lock in do_lock_shared(). Depending on the semantics you intend for exclusive-lock priority, this may not work the way you expect. I would expect threads waiting for an exclusive lock to have higher priority than threads waiting for shared locks; but when the current exclusive lock is released, every waiting thread has an equal chance of grabbing the lock.

So some (but not necessarily all) of the threads waiting for shared locks may acquire the lock before the thread that needs the exclusive lock gets a chance to grab it, and the exclusive locker may have to wait all over again.

Scenario:

We have a number of threads holding shared locks, and the maximal number of readers is reached. We add one (or a few) more shared-lock requests; these queue up on m_exclusive_release in do_lock_shared(). Now a thread requests the exclusive lock; it marks the state as write-entered and waits on m_shared_release for the readers to drain. As the threads holding shared locks finish their work, they call unlock_shared() until no shared locks remain; the last one triggers m_shared_release.notify_one(), and the thread waiting for the exclusive lock is released and runs normally until it releases the lock via m_exclusive_release.notify_all() in unlock(). That wakes all the threads trying to obtain a shared lock together with any thread now waiting for the exclusive lock, and they all race for the mutex. You cannot be sure which thread gets the lock first, so when an exclusive waiter competes with shared waiters the winner is random, and an exclusive locker can be starved: it may have to wait for the shared locks to drain all over again before it gets a chance to run.

I am fairly sure you cannot deadlock, and exclusive locks are not actually prevented from ever happening, but I do not think this is desirable behaviour.
Design

Isn't the shared-locking code in shared_mutex and recursive_shared_mutex identical? Couldn't you push that code into the shared base class?

Comments
"I am slightly confused about the influence of the maximal-readers limit before the first exclusive lock in your problem example. Say the readers overflow and we get an exclusive lock request: the new read requests queue up on m_exclusive_release, the reader count monotonically falls to 0, and the writer obtains the exclusive lock. So it doesn't really affect your example." – Emily L., Dec 29 '16 at 4:15

"That said, I agree with you that whether an exclusive or a shared waiter enters the exclusive region first is random. If an unbounded number of threads were waiting for shared locks, there might be no bound on the time before the exclusive-lock thread gets a time slice from the OS. However, I would expect a reasonable OS to queue the notified threads to run 'soon' (I remember something about threads blocked on IO operations (and signals) getting the highest priority in Windows and Linux, but don't quote me on that). So I would expect the writer to be delayed by at most the readers that were already waiting. That is still bad, though." – Emily L., Dec 29 '16 at 4:31

"On pushing the shared locking into the base class: yes and no. It would have the correct behaviour, but the current design is also bad in the sense that the two classes should not share a base class at all. Composition should be used instead to reduce code duplication, because inheritance implies that they should be Liskov-substitutable, which is wrong here: they have different locking semantics. While at it, I would prefer both classes to fully specify their interfaces, to make life easier for colleagues trying to use them, so they do not have to keep digging through the superclass definition." – Emily L., Dec 29 '16 at 4:47
Comments

"Is there any reason to avoid the boost implementation?" — For various reasons beyond my control, this project cannot use boost.

As a side note, if I remember correctly, sizeof(unsigned)*CHAR_BIT - 1 should be equivalent to std::numeric_limits