Notes: C++ Concurrency in Action — posted 2020-05-16, last modified 2020-05-30, https://bloo.heing.fun/?p=1575

 * Reading progress: pdf 161/530, book p138, 05-21
 * Reading progress: pdf 192/530, book p169, 05-22
 * Reading progress: pdf 210/530, book p187, 05-23
 * Reading progress: pdf 251/530, book p228, 20-05-24

# C++ Concurrency in Action

## 1 Introduction
##### 1.1 What is concurrency
##### Multiprocessing vs. multithreading
###### Multiprocessing
* uses the inter-process communication the OS provides; processes are strongly isolated from each other

###### Multithreading
* all threads belong to one process, so they can freely share resources
* before C++11 the standard had no threading facilities, and C++11 still has no multiprocessing facilities. Where the standard is silent, you have to use what the OS provides
* this book is about multithreading.

##### 1.2 Why use multithreading
###### 1. Separation of concerns: different threads handle different things, e.g. a UI thread and a background thread
###### 2. 
Performance
* threads run different parts of the work: task parallelism
* threads run the same work on different data: data parallelism

##### When not to use multithreading
* multithreaded code is hard to write, hard to read, and bug-prone

##### 1.3 Concurrency && multithreading in C++
###### 1.3.1 History
* C++98 had no threading support; you had to use OS libraries, or third-party wrappers such as boost
* because the C++98 memory model didn't account for threads, even those wrappers could run into problems, e.g. behaving differently on different platforms


###### 1.3.2 C++11 standard support for concurrency
* new multithreading features
	* a new memory model
	* managing threads <- chapter 2
	* protecting shared data <- chapter 3
	* synchronizing between threads <- chapter 4
	* low-level atomic operations <- chapter 5

* new atomic-operation features
	* no more platform-specific assembly code

###### 1.3.3 Efficiency of the C++ thread library
* the design goal is
	* to keep the abstraction penalty so low that dropping down to the lower-level APIs underneath buys little performance and much trouble

* the higher-level classes do have a cost, but it is about the same as hand-rolling the equivalent yourself; compilers may also inline and otherwise optimize them.
* design the application to reduce contention <- chapter 8

##### 1.4 hello world
* 
Key points
	* <thread> <- thread management
	* protecting shared data <- other headers
	* each thread needs an initial function
	* thread_name.join() makes the calling thread wait for it to finish

## 2 Managing threads
###### Covers
* the various ways to start a thread
* waiting for a thread to finish vs. letting it run
* uniquely identifying threads

#### 2.1 Basic thread management
##### 2.1.1 Launching a thread
* any callable works
	* e.g. a class that implements the function call operator

```
class A {
public:
    void operator () () const {
        // ...
    }
};
// ways to construct a thread
A a;
thread t(a);

// thread t(A());     // wrong: parsed as a function declaration
thread t( (A()) );    // extra parentheses prevent that
thread t{A()};        // the new uniform initialization syntax
thread t([]{ cout << "hi"; });
```

* once a thread is started you must join() or detach() it before the std::thread object is destroyed; otherwise the program is terminated. 
<- the std::thread destructor calls std::terminate()
	* you only need to call join()/detach() before the std::thread object is destroyed (it is usually a local variable, so it is destroyed automatically); that prevents [the destructor calling terminate()]
	* once detach()ed, the std::thread object no longer refers to the thread and may not be join()ed (<- otherwise a runtime error). Ownership and control are passed over to the C++ Runtime Library
	* make sure exceptions cannot cause the join()/detach() call to be skipped.
	* if you detach, you must ensure that every variable the thread accesses stays valid for the thread's whole lifetime.
		* if the thread was constructed from a callable object, the thread copies that object, so the original may be destroyed.
		* if the thread holds pointers or references to local variables, you can get the dangling-pointer kind of problem.
			* guard against this by copying the outside local variables into the thread

##### 2.1.2 Waiting for a thread to complete
* typical use: start several threads, then wait for them all to finish.
* join() also cleans up the storage associated with the thread.
	* so join() may only be called once; joinable() tells you whether a join is still possible
* if you want to
	* check whether a thread has finished, or
	* wait for a limited time,
	* you need other mechanisms, such as condition variables or futures

##### 2.1.3 Waiting in exceptional circumstances
* join() inside the catch block
* if you must join() // 
to guarantee the thread has finished before the function returns
	* you can wrap the thread in a class whose destructor calls join(); the class stores a reference to the thread.

##### 2.1.4 Running threads in the background

#### 2.2 Passing arguments to a thread function
* arguments are copied by default
	* so the thread constructor cannot handle [reference parameters] on its own
		* use std::ref
* the thread constructor first copies the supplied arguments; only later, in the context of the new thread, are they converted to the parameter types.

* a slightly hacky form: invoking a member function on an object

```
class A {
public:
    void f(int);
};
A a;
int i = 1;
thread t(&A::f, &a, i);   // runs a.f(1)
```

* threads are movable, but aren't copyable

#### 2.3 Transferring ownership of a thread
* if thread t already has a thread associated with it, then t = move(t2) /* where t2 is also associated with a thread */ makes t call terminate();

```
you can't just 'drop' a thread by assigning a new value to the std::thread object that manages it
```

* return a thread from a function

```
thread f() {
    void f1();
    return thread(f1);
}
thread f() {
    void f1(int);
    thread t(f1, 1);
    return t;
}
```


* pass a thread into a function call

```
void f1();
// moving from a temporary object is automatic and implicit
f(thread(f1));
f(move(t));
```

* scoped_thread

```
class scoped_thread {
    std::thread t;
public:
    explicit scoped_thread(std::thread t_) : t(std::move(t_)) {
        if (!t.joinable()) {
            throw std::logic_error("No thread");
        }
    }
    ~scoped_thread() {
        t.join();
    }
    scoped_thread(scoped_thread const&) = delete;
    scoped_thread& operator=(scoped_thread const&) = delete;
};
```
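As a sketch of how scoped_thread might be used (the worker lambda and the global `result` are invented for illustration; the class is repeated from the listing above so the snippet stands alone):

```cpp
#include <stdexcept>
#include <thread>
#include <utility>

// scoped_thread as in the listing above: owns the thread, joins in its destructor
class scoped_thread {
    std::thread t;
public:
    explicit scoped_thread(std::thread t_) : t(std::move(t_)) {
        if (!t.joinable())
            throw std::logic_error("No thread");
    }
    ~scoped_thread() { t.join(); }
    scoped_thread(scoped_thread const&) = delete;
    scoped_thread& operator=(scoped_thread const&) = delete;
};

int result = 0;  // written by the worker, read only after the join

void do_work() {
    {
        // joined automatically when st goes out of scope, even if an exception unwinds
        scoped_thread st(std::thread([] { result = 42; }));
    }
    // here the worker is guaranteed to have finished
}
```

Because the join happens in a destructor, it runs on every exit path, which is exactly the exception-safety point of 2.1.3.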
* thread pools are possible because threads are movable
	* e.g. store them in a vector

```
for (auto i = 0; i < 5; ++i) {
    vs.push_back(thread(f1, i));
}
// call join() on each thread in turn
for_each(vs.begin(), vs.end(), std::mem_fn(&thread::join));
```


#### 2.4 Choosing the number of threads at runtime
* because you cannot return a value directly from a thread
	* you pass in a reference, or
	* use futures

* get the number of hardware threads

```
unsigned int const hardware_threads = std::thread::hardware_concurrency();
```

#### 2.5 Identifying threads
* std::thread::id
	* t.get_id()
	* std::this_thread::get_id();
* ids can be copied and compared
* std::hash<std::thread::id>

## chapter3 Sharing data between threads
###### Covers
* problems with sharing data between threads
* protecting data with mutexes
* alternative facilities for protecting shared data

### 3.1 Problems with sharing data between threads
* the problems with sharing data between threads are all due to modifying data.
* invariants
	* doubly linked list:
		* if A's next is B, then B's prev is A
			* while deleting a node, this invariant is temporarily broken

	* the simplest potential problem with modifying data that's shared between threads is that of broken invariants.

#### 3.1.1 Race conditions
* a race condition is anything where the outcome depends on the relative ordering of execution of operations on two or more threads.
	* when talking about concurrency, the term *race condition* is usually used to mean a *problematic* race condition;
	* a *data race* means the specific type of race condition that arises because of concurrent modification to a single object.
		* *data races* cause *undefined behavior*
* it's when the race condition leads to broken invariants that there is a problem.
	* problematic race conditions typically occur where completing an 
operation requires modification of two or more distinct pieces of data
		* in those cases, because the data must be modified in separate instructions, another thread could potentially access the data structure in between.

#### 3.1.2 Avoiding problematic race conditions
* several ways:
	* transaction-like approaches
	* *lock-free programming*: modify the design of the data structure and its invariants.

* the most basic mechanism for protecting shared data provided by C++ is the *mutex*

### 3.2 Protecting shared data with mutexes
#### 3.2.1 Using mutexes in C++
* mutex
	* not recommended to use directly, because you must unlock() manually (including when an exception is thrown)
	* use std::lock_guard instead

```
/*
lock_guard is like unique_lock, but unique_lock can lock()/unlock() freely,
while lock_guard can only lock at construction
*/
std::lock_guard<std::mutex> guard(mtx);

std::lock(lhs.mtx, rhs.mtx);
// the previous line has already locked them
std::lock_guard<std::mutex> guard(lhs.mtx, std::adopt_lock);
```

* in the majority of cases it's common to group the mutex and the protected data together in a class rather than use global variables. If all the member functions of the class lock the mutex before accessing any other data members and unlock it when done, the data is well protected.
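A minimal sketch of that convention (class name and members invented for illustration): the mutex sits right next to the int it protects, and every member function locks before touching it.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// hypothetical example of grouping a mutex with the data it protects
class counter {
    mutable std::mutex m;   // mutable: const members like get() can still lock it
    int value = 0;
public:
    void increment() {
        std::lock_guard<std::mutex> lk(m);
        ++value;
    }
    int get() const {
        std::lock_guard<std::mutex> lk(m);
        return value;
    }
};

counter c;

// four threads hammering the counter; with the lock, no increment is lost
void hammer() {
    std::vector<std::thread> ts;
    for (int i = 0; i < 4; ++i)
        ts.emplace_back([] { for (int j = 0; j < 1000; ++j) c.increment(); });
    for (auto& t : ts) t.join();
}
```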
#### 3.2.2 Structuring code for protecting shared data
* guidelines
	* member functions don't return pointers or references to the protected data
	* member functions don't pass such pointers or references to the functions they call

#### 3.2.3 Spotting race conditions inherent in interfaces
###### Examples

```
Say you put one big lock on a stack.
After a caller gets a result from size()/empty(), but before it acts on that
information, the state of the stack may already have changed.

In pop(), the underlying container has already popped the element, but an
exception is thrown while returning the value. The data is then lost forever.
```

###### Race conditions in an interface
	* arise from tightly interleaved calls by several threads to different member functions of one object
	* the solution is to change the interface

* a stack that supports multithreading
	* option 1: pass in a reference
		* requires default construction
		* requires assignability
			* many user-defined types do not support assignment, but do support move construction or copy construction

	* option 2: require a no-throw copy constructor or move constructor

```
std::is_nothrow_copy_constructible
std::is_nothrow_move_constructible
```

	* option 3: return a pointer to the popped item
		* shared_ptr
	* check again inside pop() whether the stack is empty; if so, throw an exception

###### Note
* problematic race conditions in interfaces essentially arise because of locking at too small a granularity; the protection does not cover the entirety of the desired operation.
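The options above lead to something like the book's thread-safe stack; this is a condensed sketch (option 3 variant, returning a shared_ptr), not the full listing:

```cpp
#include <exception>
#include <memory>
#include <mutex>
#include <stack>
#include <utility>

struct empty_stack : std::exception {
    const char* what() const noexcept { return "empty stack"; }
};

template <typename T>
class threadsafe_stack {
    std::stack<T> data;
    mutable std::mutex m;
public:
    void push(T v) {
        std::lock_guard<std::mutex> lk(m);
        data.push(std::move(v));
    }
    // top() and pop() combined: the emptiness check and the removal happen
    // under one lock, so no other thread can slip in between them
    std::shared_ptr<T> pop() {
        std::lock_guard<std::mutex> lk(m);
        if (data.empty()) throw empty_stack();   // re-check under the lock
        auto res = std::make_shared<T>(std::move(data.top()));
        data.pop();
        return res;
    }
    bool empty() const {
        std::lock_guard<std::mutex> lk(m);
        return data.empty();
    }
};
```

Because pop() returns the value and removes it in one locked step, the lost-data and check-then-act races described above cannot occur.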
#### 3.2.4 Deadlock: the problem and a solution
* advice
	* always lock the two mutexes in the same order, e.g. always lock A before B

```
But:
given some f(A, A),
two threads calling f(a, b) and f(b, a) can still deadlock.
```

###### Locking several mutexes with lock()
* if it cannot acquire all of them, it throws and releases any it already holds

```
lock(mtx1, mtx2);
lock_guard<mutex> lg1(mtx1, adopt_lock);
lock_guard<mutex> lg2(mtx2, adopt_lock);
```

#### 3.2.5 Further guidelines for avoiding deadlock
###### 1. Other situations where deadlock appears
* two threads each calling join() on the other
* several threads...

###### 2. Guideline
* don't wait for another thread if there is a chance it's waiting for you.

###### 2.1 Hold only one lock per thread
* if you need several locks, acquire them all at once with std::lock()

###### 2.2 Avoid calling user-supplied code while holding a lock

###### 2.3 Acquire locks in a fixed order
* if you need several locks and cannot take them in one std::lock() call, it is best to lock them in the same order in every thread.

###### 2.4 Use a lock hierarchy
* give each mutex a layer number, and record the layer numbers of the mutexes the current thread already holds.
	* a thread is not permitted to lock a mutex if it already holds a lock from a lower layer.

###### User-defined lock types
* implement three member functions
	* lock()
	* unlock()
	* try_lock()
		* if the lock on the mutex is held by another thread, 
it returns false rather than waiting until the calling thread can acquire the lock on the mutex.

```
class hierarchical_mutex {
    std::mutex internal_mutex;
    // the layer this mutex belongs to
    unsigned long const hierarchy_value;
    // the layer the locking thread was on before taking this mutex
    unsigned long previous_hierarchy_value;
    // the current thread's layer
    static thread_local unsigned long this_thread_hierarchy_value;
    void check_for_hierarchy_violation() {
        if (this_thread_hierarchy_value <= hierarchy_value) {
            throw std::logic_error("mutex hierarchy violated");
        }
    }
    void update_hierarchy_value() {
        previous_hierarchy_value = this_thread_hierarchy_value;
        this_thread_hierarchy_value = hierarchy_value;
    }
public:
    explicit hierarchical_mutex(unsigned long value) :
        hierarchy_value(value),
        previous_hierarchy_value(0) {}
    void lock() {
        check_for_hierarchy_violation();
        internal_mutex.lock();
        update_hierarchy_value();
    }
    void unlock() {
        this_thread_hierarchy_value = previous_hierarchy_value;
        internal_mutex.unlock();
    }
    bool try_lock() {
        check_for_hierarchy_violation();
        if (!internal_mutex.try_lock())
            return false;
        update_hierarchy_value();
        return true;
    }
};
thread_local unsigned long hierarchical_mutex::this_thread_hierarchy_value(ULONG_MAX);
```

###### Extending these guidelines beyond locks
* the same ideas extend to situations such as threads waiting for each other to finish.
	* a thread waits only for threads lower down the hierarchy.
		* ensure that your threads are joined in the same function that started them.
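A sketch of how the hierarchy is meant to be used (the layer numbers 10000/5000 and function names are arbitrary; a condensed copy of the class is repeated so the snippet compiles on its own):

```cpp
#include <climits>
#include <mutex>
#include <stdexcept>

// condensed copy of the hierarchical_mutex above (try_lock omitted;
// lock_guard only needs lock()/unlock())
class hierarchical_mutex {
    std::mutex internal_mutex;
    unsigned long const hierarchy_value;
    unsigned long previous_hierarchy_value;
    static thread_local unsigned long this_thread_hierarchy_value;
    void check() {
        if (this_thread_hierarchy_value <= hierarchy_value)
            throw std::logic_error("mutex hierarchy violated");
    }
public:
    explicit hierarchical_mutex(unsigned long v)
        : hierarchy_value(v), previous_hierarchy_value(0) {}
    void lock() {
        check();
        internal_mutex.lock();
        previous_hierarchy_value = this_thread_hierarchy_value;
        this_thread_hierarchy_value = hierarchy_value;
    }
    void unlock() {
        this_thread_hierarchy_value = previous_hierarchy_value;
        internal_mutex.unlock();
    }
};
thread_local unsigned long hierarchical_mutex::this_thread_hierarchy_value(ULONG_MAX);

// arbitrary layer numbers: higher layers must be locked first
hierarchical_mutex high_level_mutex(10000);
hierarchical_mutex low_level_mutex(5000);

bool ordered_calls() {            // fine: 10000 is locked before 5000
    std::lock_guard<hierarchical_mutex> hi(high_level_mutex);
    std::lock_guard<hierarchical_mutex> lo(low_level_mutex);
    return true;
}

bool reversed_calls() {           // while 5000 is held, locking 10000 throws
    std::lock_guard<hierarchical_mutex> lo(low_level_mutex);
    try {
        std::lock_guard<hierarchical_mutex> hi(high_level_mutex);
    } catch (std::logic_error&) {
        return false;
    }
    return true;
}
```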

* it is a bad idea to wait for a thread while holding a lock.
* once you have designed your code to avoid deadlock, std::lock() and std::lock_guard cover most of the cases of simple locking.
	* std::unique_lock provides more flexibility.


#### 3.2.6 Flexible locking with std::unique_lock
* std::unique_lock(mtx, std::defer_lock)
	* indicates the mutex should remain unlocked on construction
	* you can then
		* call lock() on the unique_lock, or
		* pass the unique_lock object itself to std::lock()

```
std::unique_lock<std::mutex> lock_a(mtx1, std::defer_lock);
std::unique_lock<std::mutex> lock_b(mtx2, std::defer_lock);
std::lock(lock_a, lock_b);

bool b = lock_a.owns_lock();
```

* you can also unlock() early to improve performance.


#### 3.2.7 Transferring mutex ownership between scopes
* std::unique_lock is movable but not copyable
* a function can lock a mutex and transfer ownership of that lock to the caller.

```
unique_lock<mutex> get_lock() {
    extern mutex mtx;
    unique_lock<mutex> lk(mtx);
    return lk;
}
int main() {
    unique_lock<mutex> lk(get_lock());
    // ...
}
```

#### 3.2.8 Locking at an appropriate granularity
* lock granularity describes the amount of data protected by a single lock.
* locking at an appropriate granularity is not only about the amount of data locked; it is also about how long the lock is held and what operations are performed while it is held.
* a lock should be held for only the minimum possible time needed to perform the required operations.
	* time-consuming operations such as acquiring another lock or waiting for I/O to complete should not be done while holding a lock unless absolutely necessary.
* if you do not hold the required locks for the entire duration of an operation, you are exposing yourself to race conditions.
* when not all accesses to the data structure require the same level of protection, we need alternative mechanisms instead of plain std::mutex
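The "hold the lock only as long as necessary" advice of 3.2.8 can be sketched with std::unique_lock (the variable and `expensive_transform` are invented stand-ins): the lock is dropped around the slow part and re-acquired only to publish the result.

```cpp
#include <mutex>

std::mutex the_mutex;
int shared_value = 21;

int expensive_transform(int v) { return v * 2; }  // stand-in for slow work

void process() {
    std::unique_lock<std::mutex> lk(the_mutex);
    int local = shared_value;                 // copy the data under the lock
    lk.unlock();                              // don't hold it during the slow part
    int result = expensive_transform(local);
    lk.lock();                                // re-lock only to write the result
    shared_value = result;
}
```

Note the race-condition caveat above still applies: this is only safe if no invariant spans the read and the write.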
### 3.3 Alternative facilities for protecting shared data
* mutexes are not the only game in town when protecting shared data; there are alternatives that provide more appropriate protection in specific scenarios.

#### 3.3.1 Protecting shared data only during initialization
* wrong: double-checked locking
	* p may already have been written while the resource it points to is not yet fully initialized

```
void f() {
    if (!p) {
        std::lock_guard<std::mutex> lk(resource_mutex);
        if (!p) {
            // init resource
        }
    }
}
```

###### Best practice
* std::call_once
	* works with any function or callable object.

```
std::shared_ptr<resource> res_ptr;
std::once_flag res_flag;
void init_res() {
    res_ptr.reset(new resource);
}
void f1() {
    std::call_once(res_flag, init_res);
    res_ptr->f2();
}
```

* also used for lazy initialization of class members.

```
class A {
    std::once_flag flag;
    resource res;
    void init_res() {
        res = resource();
    }
    void f1() {
        std::call_once(flag, &A::init_res, this);
    }
};
```

* std::once_flag can not be copied or moved, like std::mutex

#### 3.3.2 Protecting rarely updated data structures
* if any thread has a shared lock, a thread that tries to acquire an exclusive lock will block until all other threads have released their locks.
* if any thread has an exclusive lock, no other thread may acquire a shared or exclusive lock until the first thread has released its lock.

```
#include <map>
#include <string>
#include <mutex>
#include <boost/thread/shared_mutex.hpp>
class dns_entry;
class dns_cache {
    std::map<std::string, dns_entry> entries;
    mutable boost::shared_mutex entry_mutex;
public:
    dns_entry find_entry(std::string const& domain) const {
        boost::shared_lock<boost::shared_mutex> lk(entry_mutex);
        std::map<std::string, 
            dns_entry>::const_iterator const it = entries.find(domain);
        return (it == entries.end()) ? dns_entry() : it->second;
    }
    void update_or_add_entry(std::string const& domain, dns_entry const& dns_details) {
        std::lock_guard<boost::shared_mutex> lk(entry_mutex);
        entries[domain] = dns_details;
    }
};
```

#### 3.3.3 Recursive locking
* std::recursive_mutex
	* can acquire multiple locks on a single instance from the same thread.
	* call unlock() as many times as lock() was called to release it.
		* std::lock_guard<std::recursive_mutex>
		* std::unique_lock<std::recursive_mutex>
	* most of the time, if you think you want a recursive mutex, you probably need to change your design instead.
	* most common (quick-and-dirty) use: a class with one big lock, where every member function locks the mutex first; some member functions call other member functions, and that is where recursive_mutex gets used, 
because a plain mutex cannot be locked twice by the same thread
		* but it is better to extract the code the two functions share into a separate private member function that does not lock /* because the member functions that call this new one have already locked */ 

## chapter4 Synchronizing concurrent operations
* covers:
	* waiting for an event
	* waiting for one-off events with futures
	* waiting with a time limit
	* using synchronization of operations to simplify code

* often one thread needs to wait for another
	* the shared-data approach works, something like while(!finished);
	* best practice:
		* condition variables
		* futures

### 4.1 Waiting for an event or other conditions
* the most basic mechanism for waiting for an event to be triggered by another thread is the condition variable
	* conceptually, a condition variable is associated with some event or other condition
		* and one or more threads can wait for that condition to be satisfied.
		* when some thread has determined that the condition is satisfied, it can then notify one or more of the threads waiting on the condition variable.

#### 4.1.1 Waiting for a condition with a condition variable
###### Standard header condition_variable
* provides:
	* std::condition_variable
	* std::condition_variable_any
* both need to work with a mutex in order to provide appropriate synchronization
	* std::condition_variable is limited to working with std::mutex
	* std::condition_variable_any can work with anything that meets some minimal criteria for being mutex-like

```
#include <condition_variable>
#include <mutex>
#include <queue>

class data_chunk;
std::mutex mut;
std::queue<data_chunk> 
    data_queue;
std::condition_variable data_cond;
data_chunk prepare_data();

bool more_data_to_prepare();
void data_preparation_thread() {
    while (more_data_to_prepare()) {
        data_chunk const data = prepare_data();
        std::lock_guard<std::mutex> lk(mut);
        data_queue.push(data);
        data_cond.notify_one();
    }
}

void data_processing_thread() {
    while (true) {
        std::unique_lock<std::mutex> lk(mut);
        data_cond.wait(lk, []{ return !data_queue.empty(); });
        data_chunk data = data_queue.front();
        data_queue.pop();
        lk.unlock();
        process(data);
        if (is_last_chunk(data))
            break;
    }
}
```

* using a queue to transfer data between threads is a common scenario

#### 4.1.2 Building a thread-safe queue with condition variables

* data members declared mutable may be modified inside const member functions
	* if a const member function needs mutex.lock(), declare the member as mutable std::mutex mtx;

* if the waiting thread is going to wait only once, so that when the condition is true it will never wait on this condition variable again, a condition variable might not be the best choice of synchronization mechanism. This is especially true if the condition being waited for is the availability of a particular piece of data. In this scenario, a *future* might be more appropriate.
### 4.2 Waiting for one-off events with futures
* header *future*
###### Future types:
* std::future<>
	* an instance of std::future is the one and only instance that refers to its associated event
	* .get() can only be called once; a second call throws std::future_error: No associated state.

* std::shared_future<>
	* multiple instances of std::shared_future may refer to the same event.
	* all instances become ready at the same time, and they may all access any data associated with the event.

* the template parameter is the type of the associated data.
	* std::future<void> and std::shared_future<void> indicate there is no associated data.

* if multiple threads need to access a single future object, they must protect access via a mutex or other synchronization mechanism.
	* but multiple threads may each access their own copy of a std::shared_future<> without further synchronization, even if they all refer to the same asynchronous result.
* the most basic of one-off events is the result of a calculation that has been run in the background.

#### 4.2.1 Returning values from background tasks
* use std::async to start an asynchronous task for which you do not need the result right away.
	* when you need the value, you call get() on the future; the thread blocks until the future is ready and then returns the value.
	* if arguments are rvalues, the copies are created by moving the originals. 
This allows the use of move-only types as both the function object and the arguments.

```
T f1(arg1, arg2);
std::future<T> fu = std::async(f1, arg1, arg2);


class A {
public:
    int f2(arg1, arg2);
    int operator() (double);
};
A a;
std::future<T> fu = std::async(&A::f2, &a, arg1, arg2);
std::future<T> fu = std::async(&A::f2, ref(a), arg1, arg2);
// calls tmp_a.f2(arg1, arg2) where tmp_a is a copy of a
std::future<T> fu = std::async(&A::f2, a, arg1, arg2);

// calls tmp_a(1.0) where tmp_a is move-constructed from A()
std::future<T> fu = std::async(A(), 1.0);
// calls a(1.0)
std::future<T> fu = std::async(ref(a), 1.0);
```

* the extra first parameter of std::async
	* its type is std::launch
	* std::launch::deferred
		* the function call is deferred until either wait() or get() is called on the future
	* std::launch::async
		* the function must run on its own thread
	* the default is std::launch::deferred | std::launch::async
		* meaning the implementation may choose

```
...async(std::launch::async, A(), 1.0);
```

#### 4.2.2 Associating a task with a future
* std::packaged_task<T>
	* ties a future to a function or callable object
	* when invoked, it calls the associated function or callable object and makes the future ready, with the return value stored as the associated data.
	* T is a function signature
		* void() -> function taking no arguments and returning nothing
		* int(std::string&, double&)
	* a function or callable object must be passed in at construction
		* its signature need not match T exactly; implicit conversions are enough.
	* packaged_task<> is itself a callable object, 
implementing operator(). It contains a std::future<T1> and a void operator()(arg1...);
		* std::packaged_task<T1(arg1...)>
		* can be wrapped in a std::function object
		* can be passed to a std::thread
		* can be invoked directly
* uses:
	* as a building block for thread pools or other task management schemes
		* running each task on its own thread
		* running tasks sequentially on a particular background thread
	* if a large operation can be divided into self-contained sub-tasks, each of these can be wrapped in a std::packaged_task<> instance, and that instance passed to the task scheduler or thread pool.
		* this abstracts out the details of the tasks
		* the scheduler just deals with std::packaged_task<> instances rather than individual functions.

##### Passing tasks between threads
* Many GUI frameworks require that updates to the GUI be done from specific threads, so if another thread needs to update the GUI, it must send a message to the right thread in order to do so.
	* std::packaged_task provides one way of doing this without requiring a custom message ... 


```
// Running code on a GUI thread using std::packaged_task
#include <deque>
#include <mutex>
#include <future>
#include <thread>
#include <utility>

std::mutex m;
std::deque<std::packaged_task<void()>> tasks;
bool gui_shutdown_message_received();
void get_and_process_gui_message();

void gui_thread() {
    while (!gui_shutdown_message_received()) {
        get_and_process_gui_message();
        std::packaged_task<void()> task;
        {
            std::lock_guard<std::mutex> lk(m);
            if (tasks.empty())
                continue;
            task = std::move(tasks.front());
            tasks.pop_front();
        }
        task();
    }
}
std::thread gui_bg_thread(gui_thread);

template<typename Func>
std::future<void> post_task_for_gui_thread(Func f) {
    std::packaged_task<void()> task(f);
    std::future<void> res = task.get_future();
    std::lock_guard<std::mutex> lk(m);
    tasks.push_back(std::move(task));
    return res;
}
```

#### 4.2.3 Making std::promises
* uses:
	* tasks that cannot be expressed as a simple function call
	* tasks where the result may come from more than one place

* std::promise<T> provides a means of setting a value, which can later be read through an associated std::future<T> object

#### 4.2.4 Saving an exception for the future

```
future<int> fu = async(f1, -1);
// if f1(-1) throws an exception,
// then fu.get() rethrows that exception
fu.get();
```

* the rethrown exception may be the original or a copy, depending on the implementation
* the same applies to packaged_task and promise
* a promise can set_exception()

```
eg1:
try {
    pro.set_value(calculate_value());
} catch(...) 
{\n    pro.set_exception(std::current_exception());\n}\n\neg2: \n\/\/ if the failure is known up front, set the exception explicitly\n\/\/ cleaner code, and easier for the compiler to optimize\n\/\/ (std::copy_exception in early drafts became std::make_exception_ptr in C++11)\npro.set_exception(std::make_exception_ptr(std::logic_error(\"foo \")));\n```\n\n* if\n\t* a future is destroyed without its value having been set, or\n\t* a packaged_task is destroyed without having been invoked\n\t* then future.get() gets\n\t\t* a std::future_error exception, with error code\n\t\t\t* std::future_errc::broken_promise &lt;- for the future\n\n#### 4.2.5 Waiting from multiple threads\n* use std::shared_future when multiple threads wait for one event\n* std::future is only *movable*\n* std::shared_future is copyable\n\t* multiple objects can refer to the same associated state\n\t* get() may be called more than once\n\t\t* multiple get() calls from the same thread\n\t\t* multiple get() calls from different threads \n\n* std::shared_future's member functions are unsynchronized; to avoid data races when accessing a single object from multiple threads:\n\t* best practice is to give each thread its own copy of the shared_future\n\n* Uses:\n\t* in a spreadsheet, first compute in parallel the cells whose values do not depend on other cells, then compute the cells whose values do. The dependent cells can reference the first kind via shared_future\n\n* initializing std::shared_future from std::future\n\t* initializing from an rvalue needs no explicit move \n\n```\nstd::shared_future&lt;int> sf(std::move(fu));\nstd::shared_future&lt;int> sf(promise_a.get_future());\nauto shared_fu = promise_a.get_future().share();\n```\n\n### 4.3 Waiting with a time limit\n* scenarios\n\t* 
there is a time limit on how long a piece of code may run\n\t* there are other things to do while the event has not yet happened\n* two kinds of waiting\n\t* wait for a duration\n\t\t* _for suffix  \n\t* wait until a point in time\n\t\t* _until suffix\n\t* both kinds of waiting may wake early because the event occurred.\n\n#### 4.3.1 Clocks    \n* now() : returns the current time of a clock.\n\t* std::chrono::system_clock::now() -&gt; returns the current time of the system clock.\n* time_point : the type of the time points.\n\t* the type of clock_a::now() is clock_a::time_point\n\n* the tick period of a clock is a fractional number of seconds\n\t* if it can only be known at run time, the period may be an average\/minimum\/something else, at the library writer's discretion.\n\n* steady clock\n\t* the clock ticks at a uniform rate, whether or not that matches its period, and cannot be adjusted\n\t* the *is_steady* static data member of the clock class is true if the clock is steady.\n\t* std::chrono::system_clock is not a steady clock\n\t\t* now() may return a time earlier than a previous call did.\n\t\t* represents the 'real time' clock of the system\n\t\t* provides functions for converting its time points to and from time_t values\n\t* std::chrono::steady_clock is a steady clock \n\n* std::chrono::high_resolution_clock\n\t* provides the smallest possible tick period (and thus the highest possible resolution) of all the library-supplied clocks\n\n#### 4.3.2 Durations\n* std::chrono::duration&lt;T1,T2&gt;\n\t* T1 : such as int, long, double\n\t* T2 : std::ratio&lt;x1,x2&gt;\n\t\t* x1 \/ x2 = (how many seconds each unit of the duration represents) \/ 1s\n\t* 
examples:\n\t\t* a number of minutes stored in a *short* is std::chrono::duration&lt;short, std::ratio&lt;60, 1&gt;&gt;     \n\t\t* a number of milliseconds stored in a double is std::chrono::duration&lt;double, std::ratio&lt;1, 1000&gt;&gt;\n\n###### predefined:\n* nanoseconds\n* microseconds\n* milliseconds\n* seconds\n* minutes\n* hours\n\n###### conversions\n* when no truncation is required, durations convert implicitly\n\t* hours to seconds: OK\n\t* seconds to hours: not OK\n* explicit conversion: std::chrono::duration_cast&lt;&gt; \n\t* the result is truncated\n\n###### arithmetic\n* supports addition, subtraction, multiplication, and division\n* .count() returns the count of the number of units in the duration\n\n```\nstd::chrono::milliseconds(1234).count() -> 1234\n```\n\n###### std::future-related\n* the wait functions all return a status indicating whether the wait timed out or the waited-for event occurred\n\t* std::future_status::timeout\n\t* std::future_status::ready\n\t* std::future_status::deferred\n* system scheduling, changes in OS clock precision, and the like may make the wait take much longer \n\n```\nstd::future&lt;int> f = std::async(f1);\nif (f.wait_for(std::chrono::milliseconds(35)) == std::future_status::ready) {\n\n}\n```\n\n#### 4.3.3 Time points\n###### std::chrono::time_point&lt;&gt;\n* std::chrono::time_point&lt;T1,T2&gt;\n\t* T1: the clock it refers to\n\t* T2: the units of measurement (a std::chrono::duration&lt;&gt;)\n* .time_since_epoch() \n\t* returns a duration\n* example:\n\t* std::chrono::time_point&lt;std::chrono::system_clock, std::chrono::minutes&gt;  \n* arithmetic\n\t* with a duration\n\t\t* std::chrono::high_resolution_clock::now() + std::chrono::nanoseconds(500)  \n\t* with a time_point\n\t\t* 
subtracting time_points of the same clock returns a duration  \n\n```\n    auto start = std::chrono::high_resolution_clock::now();\n    f1(1);\n    auto end = std::chrono::high_resolution_clock::now();\n    auto x = std::chrono::duration_cast&lt;std::chrono::seconds>(end - start).count();\n    \n```\n\n```\ncondition_variable cv;\nbool done;\nmutex m;\nbool wait_loop() {\n    auto const timeout = std::chrono::steady_clock::now() + std::chrono::milliseconds(500);\n    unique_lock&lt;mutex> lk(m);\n    \/\/ loop to guard against spurious wakeups\n    while (!done) {\n        if (cv.wait_until(lk, timeout) == cv_status::timeout) {\n            break;\n        }\n    }\n    return done;\n}\n```\n\n#### 4.3.4 Functions that accept timeouts\n* std::this_thread::sleep_for()\n* std::this_thread::sleep_until()\n* std::condition_variable\n* std::condition_variable_any\n* std::future\n* std::shared_future\n* some mutexes\n\t* std::timed_mutex\n\t* std::recursive_timed_mutex\n\t* both support\n\t\t* try_lock_for()\n\t\t* try_lock_until()\n* std::unique_lock&lt;TimedLockable&gt; \n\n### 4.4 Using synchronization of operations to simplify code\n* One way this can help simplify your code is that it accommodates a much more functional (in the sense of functional programming) approach to programming concurrency. 
Rather than sharing data directly between threads, each task can be provided with the data it needs, and the result can be disseminated to any other threads that need it through the use of futures.\n\n#### 4.4.1 Functional programming with futures\n* functional programming means the result depends only on the arguments, not on any external state\n* a pure function modifies no external state\n* pure functions suit multithreading\n\t* with no shared data, no mutex is needed\n\t* in Haskell, functions are pure by default\n\n* C++11 also suits pure-function-style programming\n\t* lambda\n\t* std::bind\n\t* automatic type deduction for variables\n\t* future\n\t\t* futures can be passed around between threads to allow the result of one computation to depend on the result of another, without any explicit access to shared data.\n\n###### FP-style quicksort\n\n#### 4.4.2 Synchronization operations with message passing\n* CSP: Communicating Sequential Processes\n\t* threads share no data and may communicate only through channels\n\t\t* a message queue also works\n\t\t* each thread waits for a message, then processes the message it received.\n\t* also known as   \n\n\n### 4.5 Summary\n\n## Chapter5 The C++ memory model and operations on atomic types\n* covers:\n\t* the details of the C++11 memory model\n\t* the atomic types provided by the C++ standard library\n\t* the operations that are available on those atomic types\n\t* how those operations can be used to provide synchronization between threads\n\n* C++'s atomic types and atomic operations provide low-level synchronization facilities, typically compiled down to just one or two instructions.\n\n### 5.1 Memory model basics\n#### 5.1.1 Objects and memory locations\n* important things:\n\t* every 
variable is an object, including those that are members of other objects.\n\t* every object occupies at least one memory location.\n\t* variables of fundamental type occupy exactly one memory location, whatever their size, even if they are adjacent or part of an array.\n\t* adjacent bit fields are part of the same memory location\n\n#### 5.1.2 Objects, memory locations, and concurrency\n* to avoid a race:\n\t* use a mutex\n\t* use the synchronization properties of *atomic* operations\n\n#### 5.1.3 Modification orders\n\n### 5.2 Atomic operations and types in c++\n#### 5.2.1 The standard atomic types\n* &lt;atomic&gt;\n* .is_lock_free()\n\t* returns true: operations are done with atomic instructions\n\t* returns false: a lock is used internally.\n\t* only std::atomic_flag lacks this function; std::atomic_flag: \n\t\t* an atomic boolean type\n\t\t* guaranteed lock-free\n\t\t* can be used to build a simple lock, and from that all the other atomic types\n\t\t* .test_and_set() returns the current value, then sets the value to true.\n\t\t* .clear() sets the value to false\n\t\t* it is not copyable, not assignable, has no test_and_clear, and has no other operations\n\n* std::atomic&lt;&gt;\n\t* on the most popular platforms the atomic variants of the built-in types are likely lock-free, but this is not required\n\t* not copyable in the conventional sense:\n\t\t* no copy constructors, no copy assignment operators\n\t\t* but assignable from, and convertible to, the corresponding built-in type\n\t* member functions \n\t\t* load()\n\t\t* store()\n\t\t* exchange()\n\t\t* compare_exchange_weak()\n\t\t* compare_exchange_strong()\n\t\t*  +=, -=, *=, |=\n\t\t*  integral types and the std::atomic&lt;&gt; specializations for pointers 
support ++, --\n\t\t*  fetch_add\n\t\t*  fetch_or\n\t\t*  ...\n\t* the assignment operators and the named member functions return the value after\/before the change, respectively.\n\t\t* this avoids the value being modified between the change and a subsequent read.\n\t* usable with user-defined types too, but the operations are limited to load(), store(), exchange(), compare_exchange_weak(), compare_exchange_strong()\n\t* each of the operations on the atomic types has an optional memory-ordering argument.\n\t* the operations fall into three classes\n\t\t* store -&gt; memory_order_relaxed, memory_order_release, memory_order_seq_cst\n\t\t* load -&gt; memory_order_relaxed, memory_order_consume, memory_order_acquire, memory_order_seq_cst\n\t\t* read-modify-write -&gt; any of the six orderings\n\t\t* the default memory ordering is memory_order_seq_cst\n\t\n\n* alternative names\n\t* atomic_bool\n\t* atomic_char\n\t* atomic_schar\n\t* atomic_uchar\n\t* ...\n\t* naming pattern: \n\t\t* for a standard typedef or built-in type T (with signed abbreviated s, unsigned abbreviated u, long long abbreviated llong)\n\t\t\t* atomic type: atomic_T\n\t* or just use std::atomic&lt;T&gt; \n\n#### 5.2.2 Operations on std::atomic_flag\n* it's basic and is intended as a building block only.\n* rarely needed in practice\n* must be initialized like this: std::atomic_flag f = ATOMIC_FLAG_INIT;\n* always lock-free\n* member functions\n\t* clear()\n\t* test_and_set() \n\n```\nclass spinlock_mutex {\n    std::atomic_flag flag;\npublic:\n    spinlock_mutex() : flag(ATOMIC_FLAG_INIT) {}\n    void lock() {\n        while (flag.test_and_set(std::memory_order_acquire));\n    }\n    void unlock()  {\n        flag.clear(std::memory_order_release);\n    }\n};\n```\n\n\n#### 5.2.3 Operations on std::atomic&lt;bool&gt;\n* constructible from a bool, assignable from a bool\n\n```\nstd::atomic&lt;bool> b(true);\nb = 
false; \/\/ this expression returns a bool value, not a reference to b\n```\n\n* assignment expressions on atomic types return the value, not a reference to the variable\n\n```\nhere the parenthesized expression returns false:\nif (b = false) {\n\n}\nif it returned a reference, b would have to be read again, possibly yielding\na value already modified by another thread\n```\n\n* void store(new_value)\n* .exchange(new_value) -&gt; returns the value before the change\n* bool load(optional_memory_order)\n\n###### storing a new value or not depending on the current value\n* bool compare_exchange_weak(T&amp; expected, val)\n* bool compare_exchange_strong()\n* if the current value == expected, then set current value = val; otherwise set expected = current value\n* returns true if the current value was replaced, false otherwise\n* compare_exchange_weak() may fail spuriously\n\t* because the machine lacks a single compare-and-exchange instruction\n\t* so it may return false even though the current value equals expected, with neither replaced\n\t* it must typically be used in a loop\n\n```\nbool expected = false;\nextern atomic&lt;bool> b;\nwhile (!b.compare_exchange_weak(expected, true) &amp;&amp; !expected);\n```\n\n* compare_exchange_strong()\n\t* when it returns false, it is always because the values were unequal\n\n* if you want to change the value regardless (the new value may depend on the current value)\n\t* if computing the new value is expensive, compare_exchange_strong() is better\n\t* 
if computing the new value is cheap, _weak() is better, even though _weak() may fail spuriously\n\n* the memory orders used on success and on failure\n\n\n#### 5.2.4 Operations on std::atomic&lt;T*&gt;\n*  T* fetch_add()\n\t* returns the value before the change\n\n```\nFoo arr[5];\nstd::atomic&lt;Foo*> p(arr);\nFoo* x = p.fetch_add(2);\nassert(x == arr);\n```\n\n* memory_order notes\n\n#### 5.2.5 Operations on standard atomic integral types\n* no multiplication or division\n* typically used either as counters or as bitmasks\n* ++x, --x : return T, not atomic&lt;T&gt;\n\n#### 5.2.6 Operations on std::atomic&lt;UDT&gt; primary class template\n* conditions on the UDT\n\t* must have a trivial copy-assignment operator\n\t\t* no virtual functions\n\t\t* no virtual base classes \n\t\t* uses the compiler-generated copy-assignment operator\n\t* every base class and non-static data member of the user-defined type must also have a trivial copy-assignment operator\n\t* must be bitwise equality comparable \n\n#### 5.2.7 Free functions for atomic operations\n* the member functions of the atomic types have corresponding C-style free functions\n* std::shared_ptr&lt;&gt; also has atomic operations\n\t* _explicit variants\n\t* std::atomic_is_lock_free() \n\t\n```\nstd::shared_ptr&lt;my_data> p; \nstd::shared_ptr&lt;my_data> local = std::atomic_load(&amp;p);\nstd::atomic_store(&amp;p, local);\n```\n### 5.3 Synchronizing operations and enforcing ordering\n#### 5.3.1 The synchronizes-with relationship\n* obtained only between operations on atomic types\n\n#### 5.3.2 The happens-before relationship\n#### 5.3.3 Memory ordering for atomic operations\n* six orderings\n\t* memory_order_relaxed\n\t* memory_order_consume\n\t* memory_order_acquire\n\t* memory_order_release\n\t* memory_order_acq_rel\n\t* memory_order_seq_cst &lt;- the default\n\n* 
the six fall into three classes\n\t* 1 sequentially consistent ordering\n\t\t* memory_order_seq_cst\n\t* 2 acquire-release ordering\n\t\t* memory_order_consume\n\t\t* memory_order_acquire\n\t\t* memory_order_release\n\t\t* memory_order_acq_rel\n\t* 3 relaxed ordering\n\t\t* memory_order_relaxed  \n\n\t* usually cost(1) > cost(2) > cost(3), but on x86 the difference is small\n\n###### 1 sequentially consistent ordering\n###### 2 non-sequentially consistent memory orderings\n###### 3 relaxed ordering\n* the modification order of each variable is the only thing shared between threads that are using memory_order_relaxed\n###### 4 understanding relaxed ordering\n* picture a relaxed atomic variable as a man in a cubicle with a notebook recording the variable's values\n\n###### 5 acquire-release ordering\n* loads are memory_order_acquire\n* stores are memory_order_release\n* read-modify-write operations are memory_order_acquire\/memory_order_release\/memory_order_acq_rel\n* acquire and release synchronize as a pair\n\t* a release operation synchronizes-with an acquire operation that reads the value written \n* much cheaper than sequentially consistent ordering\n\n##### 6 data dependency with acquire-release ordering and memory_order_consume \n###### memory_order_consume is about data dependency\n* two relationships\n\t* dependency-ordered-before\n\t* carries-a-dependency-to\n* usage\n\t* 1 load() with memory_order_consume\n\t* 2 store() with memory_order_release\/memory_order_acq_rel\/memory_order_seq_cst\n\t* 2 is dependency-ordered-before 1\n\n* kill_dependency(arg1)  \n\n###### 7 transitive synchronization with acquire-release ordering\n\n#### 5.3.4 Release sequences and synchronizes-with\n#### 5.3.5 Fences\n* also called *memory barriers*\n* fences are additional ordering constraints\n* fences are operations that enforce memory-ordering constraints without modifying any data and 
typically combined with atomic operations that use memory_order_relaxed.\n* fences are global operations and affect the ordering of other atomic operations in the thread that executed the fence.\n* relaxed operations on separate variables can usually be freely reordered by the compiler or the hardware; fences restrict this freedom and introduce happens-before and synchronizes-with relationships that were not present before.\n* you need a release in one thread and an acquire in another to get a synchronizes-with relationship\n* general ideas:\n\t* if an acquire operation sees a store() sequenced after a release fence, the release fence synchronizes-with the acquire operation.\n\t* if a load() sequenced before an acquire fence sees the result of a release operation, the release operation synchronizes-with the acquire fence.\n\t* combining the two: if a load() sequenced before an acquire fence sees a store() sequenced after a release fence, the release fence synchronizes-with the acquire fence.\n\n\n#### 5.3.6 Ordering nonatomic operations with atomics\n* the real benefit of using atomic operations to enforce an ordering is that they can enforce an ordering on nonatomic operations and thus avoid the undefined behavior of a data race.\n\n### 5.4 Summary\n\n## chap6 Designing lock-based concurrent data structures\n* covers:\n\t* what it means to design data structures for concurrency\n\t* Guidelines for doing so\n\t* Example implementations of data structures designed for concurrency.\n\n###### data structures for concurrent use\n* use a separate mutex and external locking\n* design the data structure itself for concurrent access\n\n### 6.1 what does it mean to design for concurrency\n* *thread-safe*:\n\t* 
multiple threads can access the data structure simultaneously, performing the same or different operations.\n\t* each thread will see a self-consistent view of the data structure\n\t* no data will be lost or corrupted\n\t* all invariants will be upheld\n\t* there are no problematic race conditions\n\n* truly designing for concurrency means providing the opportunity for concurrency to threads accessing the data structure\n* *serialization*: threads take turns accessing the data protected by the mutex; they must access it serially rather than concurrently.\n* the smaller the protected region, the fewer operations are serialized, and the greater the potential for concurrency.\n\n#### 6.1.1 Guidelines for designing data structures for concurrency\n* two considerations:\n\t* ensuring the accesses are safe\n\t\t* take care to avoid race conditions inherent in the interface to the data structure by providing functions for complete operations rather than for operation steps.\n\t\t* pay attention to exceptions\n\t\t* and so on \n\t\t* constructors\/destructors usually cannot be accessed concurrently; the user must ensure the object is not accessed before construction completes or after destruction begins.\n\t\t* if assignment\/swap()\/copy construction are supported, you need to decide whether these operations are safe to call concurrently with other operations or whether they require the user to ensure exclusive access.\n\t* enabling genuine concurrent access \n\t\t* can the scope of locks be restricted to allow some parts of an operation to be performed outside the lock? 
\n\t\t* can different parts of the data structure be protected with different mutexes?\n\t\t* do all operations require the same level of protection?\n\t\t* can a simple change to the data structure improve the opportunities for concurrency without affecting the operational semantics?\n\n### 6.2 Lock-based concurrent data structures\n#### 6.2.1 A thread-safe stack using locks\n* locking a mutex may throw an exception, but it's exceedingly rare.\n* unlocking a mutex cannot fail.\n* when writing your own class template A&lt;T&gt; whose operation f1() applies std::move(T), new, etc. to T:\n\t* if T is user-defined and those operations of T in turn call a method f2() of the same A&lt;T&gt; instance, and f1() and f2() both take the instance's single big lock, deadlock is possible. But this can be left for the caller to rule out; a caller that doesn't even ensure this is simply misusing the class.\n* the caller must ensure the instance is not used before construction finishes or after destruction begins. \n\n#### 6.2.2 A thread-safe queue using locks and condition variables\n* uses a standard container internally\n* by using the standard container you now have essentially one data item that is either protected or not. 
By taking control of the detailed implementation of the data structure, you can provide more fine-grained locking and thus allow a higher level of concurrency.\n\n#### 6.2.3 A thread-safe queue using fine-grained locks and condition variables.\n* in order to use finer-grained locking, you need to look inside the queue at its constituent parts and associate one mutex with each distinct data item.\n\n###### 1 enabling concurrency by separating data\n* e.g. have each member function touch only one member variable.\n\n###### 2 waiting for an item to pop\n\n###### other\n* once a bounded queue is full, push will either fail or block until an element has been popped from the queue to make room.\n* a bounded queue can be useful: it prevents the thread(s) populating the queue from running too far ahead of the thread(s) reading items from the queue.\n\n### 6.3 Designing more complex lock-based data structures\n\n#### 6.3.1 Writing a thread-safe lookup table using locks\n* if no value exists for a key\n\t* have the caller provide a default value\n\t* return &lt;T,bool&gt;\n\t* return a smart pointer\n* boost::shared_mutex\n\t* supports multiple reader threads \/ one writer thread\n\n* size_t std::hash&lt;T&gt;() ;\n\t* size_t -&gt; unsigned integral type\t \n\t* hash tables work best with a prime number of buckets\n\n* the hash table holds an array of bucket objects; each bucket instance has its own lock.\n\n#### 6.3.2 Writing a thread-safe list using locks\n* an iterator would reference internal data: bad.\n* provide an iteration function instead\n\t* leave it up to the user to ensure that they do not cause deadlock by acquiring locks in the user-supplied operations, and do not cause data races by storing the references for access outside the locks. 
\n\n* one lock per node, even though many nodes means many locks\n\t* node::mutex \n\t* lock the next node before releasing the current node's lock.\n* it is undefined behavior to destroy a locked mutex.\n\t* when destroying a node through a smart pointer, unlock that node's mutex first.\n\t\t* because the previous node is still locked, unlocking and then destroying the next node is safe\n\n### 6.4 summary\n\n## chap7 Designing lock-free concurrent data structures\n* covers:\n\t* implementations of data structures designed for concurrency without locks.\n\t* techniques for managing memory in lock-free data structures\n\t* simple guidelines to aid in the writing of lock-free data structures.\n\n### 7.1 Definitions and consequences\n* blocking data structures and algorithms:\n\t* those that use mutexes, condition variables, and futures to synchronize the data. \n* blocking call:\n\t* a call that will suspend the execution of a thread until another thread performs an action.   
\n* nonblocking data structures and algorithms\n\t* those that make no blocking calls\n\t* not all are lock-free   \n\n#### 7.1.1 types of nonblocking data structures\n* the earlier spin lock built on atomic_flag\n\t* nonblocking\n\t* but not lock-free\n\t* only one thread can hold it at a time; the data it guards would race if accessed without it \n\n#### 7.1.2 lock-free data structures\n###### requirements:\n* more than one thread must be able to access the data structure concurrently\n* if one of the threads accessing the data structure is suspended by the scheduler midway through its operation, the other threads must still be able to complete their operations without waiting for the suspended thread\n###### eg\n* a lock-free queue might allow concurrent push() and pop(), but break if two threads try to push new items at the same time.\n###### other\n* algorithms that use compare\/exchange operations on the data structure often have loops in them\n\t* the reason for using a compare\/exchange operation is that another thread might have modified the data in the meantime, in which case the code will need to redo part of its operation before trying the compare\/exchange again.\n\n\n#### 7.1.3 wait-free data structures\n* a wait-free data structure is a lock-free data structure that can complete its operation within a bounded number of steps, regardless of the behavior of other threads\n\t* it cannot have live lock \n\n#### 7.1.4 the pros and cons of lock-free data structures\n* must use atomic operations for the modifications\n* *live lock* occurs when two threads each try to change the data structure, but for each thread the changes made by the other require the operation to be restarted.\n\t* live locks sap performance rather than cause long-term problems \n\n###### downside:\n* may well decrease overall performance \n\t* atomic operations can be much slower than nonatomic operations. 
\n\t\t* a lock-free data structure may perform more atomic operations than a lock-based one\n\t\t* the cache ping-pong associated with multiple threads accessing the same atomic variable can be a significant performance drain.\n\n###### other\n* it is important to compare the performance aspects of a lock-based data structure and a lock-free data structure:\n\t* worst-case wait time\n\t* average wait time\n\t* overall execution time\n\t* etc. \n\n### 7.2 Examples of lock-free data structures\n* these examples first use the default memory_order_seq_cst, then reduce the constraints to other memory orders.\n* only std::atomic_flag is guaranteed not to use locks in the implementation. So on some platforms, code that appears to use only atomic operations and no locks may in fact use locks inside the standard library implementation.\n\t* on such platforms, using locks directly may be more appropriate. \n\n#### 7.2.1 thread-safe stack without locks\n* both push() and pop() rely on compare\/exchange loops\n\n```\nwhile (!x.compare_exchange_weak(y, z));\n\/*\nx == y is the invariant. It should hold throughout and must not be broken\nby another thread in between.\n*\/\n\/\/ push():\n\/\/ while new_node is being written to head, no other thread may change head,\n\/\/ or the invariant new_node->next == head would be broken.\n\/\/ while setting head = new_node, head must keep satisfying head == new_node->next\nwhile (!head.compare_exchange_weak(new_node->next, new_node));\n\n\/\/ 
pop():\n\/\/ while setting head = old_head->next, head must keep satisfying old_head == head\nwhile (!head.compare_exchange_weak(old_head, old_head->next));\n```\n\n* this is lock-free but not wait-free\n\n#### 7.2.2 stopping those pesky leaks: managing memory in lock-free data structures\n* a garbage collector is needed to reclaim deleted nodes.\n\t* because what's returned is a shared_ptr&lt;T&gt; (the data, not the node), while push() allocates the node with new.\n* when no thread is in pop(), actually destroy the to-be-deleted nodes\n\t* use an atomic counter to track the number of threads currently in pop() \n\n#### 7.2.3 detecting nodes that can not be reclaimed using hazard pointers\n* under sustained high concurrency, the garbage collector of 7.2.2 may never get to run\n* the key is to identify when no more threads are accessing a particular node. 
By far the easiest such mechanism to reason about is the use of hazard pointers.\n\n###### using hazard pointers\n* this relies on the fact: it is safe to use the value of a pointer after the object it references has been deleted^1.\n\t* but with the default implementations of new and delete, 1 is undefined behavior, so either reimplement new\/delete yourself or check whether the default implementation is acceptable\n\n###### Other notes\n* atomic operations are often 100 times slower than an equivalent nonatomic operation on desktop CPUs.\n\n#### 7.2.4 detecting nodes in use with reference counting\n* in this section the author:\n\t* 1. manages nodes directly with shared_ptr&lt;T>\n\t* 2. since the shared_ptr used in 1 may not be lock-free, emulates the shared_ptr mechanism with atomic reference counts; but the atomic operations used in 2 may themselves not be lock-free either\n\n#### 7.2.5 applying the memory model to the lock-free stack\n* the atomic operations so far all used the default memory_order; this section relaxes some of them\n\n\n#### 7.2.6 writing a thread-safe queue without locks\n\n### 7.3 Guidelines for writing lock-free data structures\n#### 7.3.1 use std::memory_order_seq_cst for prototyping\n* get it working with memory_order_seq_cst first, then look for places a more relaxed memory order could optimize.\n\n#### 7.3.2 use a lock-free memory reclamation scheme\n\n#### 7.3.3 watch out for the ABA problem\n#### 7.3.4 identify busy-wait loops and help the other thread\n\n### 7.4 summary\n\n## chap8 designing concurrent 
code\n* covers:\n\t* techniques for dividing data between threads\n\t* factors that affect the performance of concurrent code\n\t* how performance factors affect the design of data structures\n\t* exception safety in multithreaded code\n\t* scalability\n\t* example implementations of several parallel algorithms\n\n### 8.1 Techniques for dividing work between threads\n#### 8.1.1 dividing data between threads before processing begins\n#### 8.1.2 dividing data recursively\n#### 8.1.3 dividing work by type\n##### 8.1.3.1 dividing work by task type to separate concerns\n* if some threads communicate with each other too much, a single thread may work out better\n\n##### 8.1.3.2 dividing a sequence of tasks between threads\n* like a pipeline: if a series of operations must be applied to each data item, each thread performs one step and pushes the processed item onto a queue for the next thread.\n\t* suitable when the input is a stream rather than all the data up front.\n\n* it can also make processing smoother.\n\t* e.g. decoding video: if each thread decompresses whole frames on its own, there may be a second in which no thread has produced a frame, then a second in which they all finish at once. A pipeline delivers frames at a steadier rate.\n\n### 8.2 Factors affecting the performance of concurrent code\n#### 8.2.1 How many processors\n* std::thread::hardware_concurrency() returns the number of hardware threads.\n\t* but the os may also be running other io-intensive applications.\n\t* 
some implementations of std::async() take the current os load into account when deciding whether to run a task concurrently\n\t* check whether the os provides a mechanism that helps applications choose a suitable degree of concurrency\n\n#### 8.2.2 data contention and cache ping-pong\n* as the number of processors increases, so does the likelihood and performance impact of another problem: that of multiple processors trying to access the same data.\n* *high contention*, *low contention*\n* cache ping-pong:\n\t* the data will be passed back and forth between the caches many times\n\t* if a processor stalls because it has to wait for a cache transfer, it can not do any work in the meantime, even if there are other threads waiting that could do useful work.\n* a mutex acquired, locked and unlocked by different threads causes cache ping-pong on the mutex data itself\n* the more threads share data and mutexes, the more likely high contention becomes\n\n#### 8.2.3 False sharing\n* cache line\n\t* blocks of memory, typically 32 or 64 bytes\n\t* shared between cores\n\t* when a thread on one core needs to modify data in cache line 1, ownership of cache line 1 transfers to that core. This produces cache ping-pong\n* false sharing: the cache line is shared, but none of the data is.  
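\n\nThe false-sharing bullet above can be sketched in code: give each thread its own counter and align each counter so it occupies its own cache line, so that neighbouring counters never share one. This is a minimal sketch and not from the book; padded_counter, the 64-byte line size and the loop counts are assumptions, and it presumes C++17 (for over-aligned allocation; std::hardware_destructive_interference_size is the portable line size where available).\n\n```\n#include &lt;atomic>\n#include &lt;cstddef>\n#include &lt;thread>\n#include &lt;vector>\n\n\/\/ alignas(64) gives every counter its own assumed 64-byte cache line,\n\/\/ so threads incrementing neighbouring counters do not ping-pong one line.\nstruct alignas(64) padded_counter {\n    std::atomic&lt;long> value{0};\n};\n\nint main() {\n    std::vector&lt;padded_counter> counters(4);\n    std::vector&lt;std::thread> workers;\n    for (std::size_t t = 0; t &lt; counters.size(); ++t) {\n        workers.emplace_back(&#91;&amp;counters, t] {\n            for (int i = 0; i &lt; 100000; ++i)\n                counters&#91;t].value.fetch_add(1, std::memory_order_relaxed);\n        });\n    }\n    for (auto&amp; w : workers) w.join();\n}\n```\n\nWithout the alignas, the four atomics would typically sit in one or two cache lines, and every increment would contend for them even though no data is logically shared.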
\n\n#### 8.2.4 How close is your data\n* if the data one thread accesses is spread far apart in memory, it is scattered across many cache lines; performance is better when it fits in fewer cache lines\n\n#### 8.2.5 oversubscription and excessive task switching\n* oversubscription: so many threads are created that overall performance drops from excessive task switching\n* having extra threads can still help when threads block: the application performs useful work rather than leaving processors idle while threads wait.\n\n### 8.3 Designing data structures for multithreaded performance\n#### 8.3.1 dividing array elements for complex operations\n#### 8.3.2 data access patterns in other data structures\n### 8.4 Additional considerations when designing for concurrency\n### 8.5 Designing concurrent code in practice\n### 8.6 summary\n\n## chap9 advanced thread management\n* covers:\n\t* thread pools\n\t* handling dependencies between pool tasks\n\t* work stealing for pool threads\n\t* interrupting threads\n\n### 9.1 thread pools\n#### 9.1.1 the simplest possible thread pool\n* a queue of tasks\n* a fixed number of worker threads\n* if the submitter needs to wait for its task to finish, it must synchronize by itself.\n\n```\nclass thread_pool {\n    std::atomic_bool done;\n    thread_safe_queue&lt;std::function&lt;void()>> work_queue;\n    std::vector&lt;std::thread> threads;\n    join_threads joiner;   \/\/ joins all threads in its destructor\n    void worker_thread() {\n        while (!done) {\n            std::function&lt;void()> task;\n            if (work_queue.try_pop(task)) {\n                task();\n            } else {\n                std::this_thread::yield();\n            }\n        }\n    }\npublic:\n    thread_pool() : done(false), joiner(threads) {\n        unsigned const thread_count = 
std::thread::hardware_concurrency();\n        try {\n            for (unsigned i = 0; i &lt; thread_count; ++i) {\n                threads.push_back(\n                        std::thread(&amp;thread_pool::worker_thread, this));\n            }\n        } catch (...) {\n            done = true;\n            throw;\n        }\n    }\n    ~thread_pool() {\n        done = true;\n    }\n    template&lt;typename FunctionType>\n    void submit(FunctionType f) {\n        work_queue.push(std::function&lt;void()>(f));\n    }\n};\n```\n\n## Appendix A\n### A1 rvalue reference\n* binds only to rvalues\n\n```\n\/\/ declaration\nint &amp;&amp; i = 42;\n\/\/ i is an rvalue reference, but i itself is an lvalue; 42 is an rvalue\nvoid f1(int &amp;&amp; p) { p = 1; }\nf1(move(i)); \/\/ => i becomes 1\n\/\/ f1(i) fails to compile, because i is an rvalue reference and, as an expression, an lvalue\n```\n\n#### A1.1 move\n* used to \"steal\" from an rvalue. The moved-from object may be left like a default-constructed one\n\n```\n\/\/ here vec is an rvalue reference.\n\/\/ when the argument is itself an rvalue, nothing is copied\nvoid f(vector&lt;int> &amp;&amp;vec);\n```\n\n* forcing a steal\n\t* static_cast&lt;T&amp;&amp;>\n\t* std::move()\n\n```\nX x1;\nX x2 = std::move(x1);\nX x3 = static_cast&lt;X&amp;&amp;>(x2);\n```\n\n* unique_ptr can not be copied (copying does not even compile); use move\n\n```\nunique_ptr&lt;int> p = make_unique&lt;int>(1);\n\/\/ does not compile:\n\/\/ x vs.push_back(p); x\n\/\/ correct:\n\/\/ vs.push_back(move(p));\n\/\/ after the move p.get() == nullptr, so *p is no longer valid\n\nshared_ptr&lt;int> p1 = make_shared&lt;int>(1);\n\/\/ use_count ++ :\nshared_ptr&lt;int> p2 = p1;\n\/\/ copy construction also increments use_count: shared_ptr&lt;int> p2b
(p1);\nf1(p1); \/\/ f1(shared_ptr&lt;int>) passes by value, use_count ++\nvs.push_back(p2); \/\/ shared_ptr can be copied\n\n\/\/ use_count unchanged:\nshared_ptr&lt;int> p3 = move(p1); \/\/ => p1.get() == nullptr\n\n\/\/ a default-constructed shared_ptr p:\n\/\/ use_count() == 0; equal to nullptr; p.get() == 0\n\n\/\/ a default-constructed unique_ptr p:\n\/\/ equal to nullptr; p.get() == 0\n```\n\n* thread, unique_lock, future&lt;>, promise&lt;>, packaged_task&lt;> are all non-copyable but movable\n\n#### A1.2 rvalue references and function templates\n\n### A2 deleted functions\n* making a class non-copyable\n\n```\n\/\/ 1. the old hack: declare the copy constructor and copy\n\/\/ assignment operator private and leave them unimplemented.\n\/\/ copying then fails at compile time (or, from members and\n\/\/ friends, at link time, because there is no implementation)\n\n\/\/ 2. c++11 deleted functions:\nA(A const &amp;) = delete;\nA&amp; operator= (A const &amp;) = delete;\n```\n\n* making a class non-copyable but movable\n\n```\nA(A&amp;&amp; other) : a(move(other.a)) {}\nA&amp; operator=(A&amp;&amp; other) {\n    a = move(other.a);\n    return *this;\n}\n```\n\n### A3 defaulted functions\n* declare a function as =default to let the compiler generate the implementation for you. Reasons to write it out:\n\t* a function that is not declared is generated public; declaring it =default lets you make it protected\/private\n\t* declaring it =default documents, for readers of the code, that the function exists\n\t* 
=default is also useful when the compiler would otherwise not generate the function at all, e.g. a user-declared constructor suppresses the implicit default constructor, and A() = default brings it back\n\n```\nclass A {\n    private:\n        \/\/ change access\n        A() = default;\n    public:\n        \/\/ take a non-const reference\n        A(A&amp;) = default;\n        \/\/ declare as defaulted for documentation\n        A&amp; operator=(const A&amp;) = default;\n    protected:\n        \/\/ change access and add virtual\n        virtual ~A() = default;\n};\n```\n\n* a function that is user-provided rather than defaulted is never trivial. Some benefits of triviality:\n\t* objects with a trivial copy constructor, trivial copy assignment operator and trivial destructor can be copied with memcpy\/memmove\n\t* classes with trivial copy assignment can be used with atomic&lt;>, enabling atomic operations on them\n\n* if no constructor is user-written, the class can be initialized with an aggregate initializer\n\n```\nstruct A {\n    A() = default;\n    A(A const &amp;) = default;\n    int a;\n    double b;\n};\nA x = {1, 2.11};\n```\n\n#### A.4 constexpr functions\n##### A.4.1 constexpr and user-defined types\n* conditions for a class to be a literal type:\n\t* a trivial copy constructor and a trivial destructor\n\t* all non-static data members and base classes must be trivial types\n\t* must have either a trivial default constructor or a constexpr constructor other than the copy constructor.\n\n* a constexpr function can only call other constexpr functions\n* a constexpr function can be used in constant expressions (and can still be called with non-constant arguments at runtime)\n\n##### A.4.2 constexpr objects\n\n##### A.4.3 constexpr function requirements\n* requirements, apart from those on constructors:\n\t* all parameters are 
literal types\n\t* the return type is a literal type\n\t* the function body contains a single return statement\n\t* the return expression must qualify as a constant expression\n\t* any constructor or conversion operator used to construct the return value must be constexpr\n\n* extra requirements on constexpr member functions:\n\t* must not be virtual\n\t* the enclosing class must be a literal type\n\n* requirements on constexpr constructors:\n\t* the constructor body must be empty\n\t* every base class must be initialized\n\t* every non-static data member must be initialized\n\t* any constructor used in the member initialization list must qualify as a constant expression.\n\n* trivial copy constructors are implicitly constexpr\n\n##### A.4.4 constexpr and templates\n\n#### A.5 Lambda functions\n* simplest form\n\n```\n\/\/ no parameters, no return value, uses only globals\n&#91;] {\n    do_stuff();\n    do_more_stuff();\n}();\n```\n\n* example\n\n```\nvector&lt;int> data = make_data();\nfor_each(data.begin(), data.end(), &#91;](int i) { cout&lt;&lt;i&lt;&lt;endl; });\n```\n\n* for_each(Iterator start, Iterator end, arg3)\n\t* arg3 is callable (a function, or a class implementing operator())\n\t* arg3 takes each visited element as input\n\n```\nvector&lt;int> vs;\nvoid f(int i) {\n    cout&lt;&lt;i&lt;&lt;endl;\n}\nstruct A {\n    void operator()(int i) {}\n};\nfor_each(vs.begin(), vs.end(), f);\n\nA a;\nfor_each(vs.begin(), vs.end(), a);\n```\n\n* lambda return type\n\t* can be deduced from a single return statement in the lambda body.\n\t* or specified explicitly: &#91;](args...) -> return_type {}\n\n```\ncond.wait(lk, &#91;]() -> bool { return myb; });\n```\n\n##### 
A.5.1 lambda functions that reference local variables\n\n* &#91;=] \/\/ capture copies of all enclosing local variables\n* &#91;&amp;] \/\/ capture references to all enclosing local variables\n* capture everything, copy by default, some by reference\n\n```\n\/\/ j and k are captured by reference\nint i=1, j=2, k=3;\n&#91;=, &amp;j, &amp;k]{return i+j+k;}();\n```\n\n* capture everything, reference by default, some by copy\n\n```\n&#91;&amp;, j, k]{return i+j+k;}();\n```\n\n* capture only some variables\n\n```\n&#91;&amp;i, j, &amp;k]{return i+j+k;}();\n\/\/ a lambda inside a member function\nclass A {\n    int x;\n    void foo(vector&lt;int>&amp; vs) {\n        for_each(vs.begin(), vs.end(), &#91;this](int &amp; i){\n            i += x;\n        });\n    }\n};\n```\n\n* places it is used in concurrency\n\t* std::condition_variable::wait()\n\t* std::packaged_task&lt;>\n\t* thread pools for packaging small tasks\n\t* std::thread constructor\n\t* as the function when using parallel algorithms\n\t\t* parallel_for_each()\n\n#### A.6 variadic templates\n#### A.7 automatic deduction of variable types\n* if a variable is initialized in its declaration from a value of the same type, you can specify the type as auto\n* rules to know\n\t* array types decay to pointers, and references are dropped unless the type expression explicitly declares the variable as a reference\n\n```\nint x = 0;\nint &amp; r = x;\nauto a = r;  \/\/ int\nauto&amp; y = r; \/\/ int &amp;\n```\n\n#### A.8 Thread-local 
variables\n<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36,79],"tags":[],"class_list":["post-1575","post","type-post","status-publish","format-standard","hentry","category-36","category-79"],"_links":{"self":[{"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/posts\/1575","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1575"}],"version-history":[{"count":2,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/posts\/1575\/revisions"}],"predecessor-version":[{"id":1605,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=\/wp\/v2\/posts\/1575\/revisions\/1605"}],"wp:attachment":[{"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1575"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1575"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/bloo.heing.fun\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1575"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}