  • Tue, Aug 14, 2018, 10:01
  • Datacenter migration: how to reduce data center risk

Datacenter migration
Datacenter migration: before considering the complexity of a data center design, it is necessary to aim for a resilient system with no single point of failure (SPOF). By definition, a SPOF is a component whose failure renders the entire system inoperable; in other words, one local fault produces a total outage. SPOFs may be component failures or incorrect human interventions, such as performing a switching operation without knowing how the system will react.
A 2N redundant system can be regarded as the minimum requirement for eliminating SPOFs. For simplicity, assume the data center's 2N design consists of two identical electrical and mechanical systems, A and B. Fault tree analysis (FTA) highlights the combinations of events that lead to failure. Modeling human error in FTA, however, is very difficult: the data used to model it is inherently subjective, and the variables are many.
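A minimal sketch of how such a fault tree can be evaluated numerically. The failure probabilities below are illustrative assumptions, not figures from the article:

```python
# Minimal fault-tree sketch for a 2N system (illustrative numbers only).
# Top event: site failure. In an ideal 2N design, both independent
# systems A and B must fail; a shared "enhancement" adds a common-cause path.

def and_gate(*probs):
    """All inputs must fail (independent events)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Any single input failing causes the top event."""
    p_ok = 1.0
    for x in probs:
        p_ok *= (1.0 - x)
    return 1.0 - p_ok

P_A = P_B = 1e-3     # assumed annual failure probability per system
P_shared = 1e-4      # assumed failure probability of a shared component

ideal_2n = and_gate(P_A, P_B)                       # both A and B fail
with_shared = or_gate(and_gate(P_A, P_B), P_shared) # ...or the shared part fails

print(f"ideal 2N:        {ideal_2n:.2e}")
print(f"with shared SPOF:{with_shared:.2e}")
```

Note how a single shared component dominates the result: the ideal 2N figure is two orders of magnitude better than the same design with a common-cause path added.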
If the two systems in this 2N example are physically separate, any operation on one should have no effect on the other. In practice, however, "enhancements" are common: the simple 2N design acquires additional components, such as disaster recovery links and shared storage connecting the two systems.
In large-scale designs this becomes an automatic control system (such as SCADA or a BMS) rather than a simple mechanical interlock. The basic principle of the 2N design has been broken, and the complexity of the system has grown exponentially, as have the skills required of the operations team.
A design review may still show that 2N redundancy has been achieved, but the resulting complexity and operational challenges undermine the basic requirements of a high-availability design.
Studies show that a particular sequence of events leading to failure is usually unpredictable; its consequences are not known until it happens. Because such sequences are unknown in advance, they cannot become part of the fault tree analysis.
The Austrian physicist Ludwig Boltzmann developed an entropy equation that has since been applied in statistics, particularly to missing information. In one formulation, imagine a grid of boxes, say 4 x 2 or 5 x 4, with a coin hidden in one box. The theory gives the number of yes/no questions needed to determine which box on the grid holds the coin. If you replace boxes with system components and the coin with an unknown failure event, you can reason about how complexity affects availability: the more components, the more missing information, and the more combinations of events by which the system can fail. Increasing the team's detailed knowledge of the system, and turning unknown failure events into known ones, reduces the combinations of failures and therefore the risk.
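The box-and-coin argument is just the base-2 logarithm of the number of equally likely possibilities. A small sketch (the grid sizes are the ones mentioned above; the mapping from boxes to components is the article's analogy):

```python
import math

def questions_needed(boxes):
    """Minimum number of yes/no questions to locate the coin among
    `boxes` equally likely boxes (missing information, in bits)."""
    return math.ceil(math.log2(boxes))

# The article's grids:
print(questions_needed(4 * 2))   # 4 x 2 grid -> 3 questions
print(questions_needed(5 * 4))   # 5 x 4 grid -> 5 questions

# Swap boxes -> components and coin -> unknown failure event:
# ruling out unknown events shrinks the search space, and with it
# the residual "missing information" about how the system can fail.
for unknown_events in (32, 16, 8):
    print(unknown_events, "->", questions_needed(unknown_events), "bits")
```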
The human factor
Research shows that any system with a human-machine interface will eventually fail through its vulnerabilities. A vulnerability here is any weak point that can cause a failure in a data center facility, and it may relate either to the infrastructure or to facility operations. Infrastructure covers equipment and systems, in particular:
  • Mechanical and electrical reliability.
  • Facility design, redundancy, and topology.
Operations involve human factors, including human error at both the individual and the management level:
  • Operational team adaptability.
  • Team reaction to vulnerabilities.
The more complex the system, the more exposed it is to human factors, and the more training and learning the facility requires. Learning applies not only to individuals but also to organizations. Organizational learning is characterized by maturity and process (shown in the chart as cumulative experience), for example around data center structure and resources, maintenance, change management, document management, commissioning, operability, and maintainability.
Personal learning is a function of knowledge, experience, and attitude (shown in the chart as depth of experience). Developing an organizational and personal learning environment helps reduce failure rates and gives operators expertise that also cuts energy waste.
The universal learning curve applied to the data center
It is important to understand that zero failures can never be achieved, because the relationship between failure and experience follows an exponential curve. Even knowledgeable, experienced facility operators remain prone to complacency, and to failure from a previously unknown sequence of events.
Conclusion
Providing a learning environment that improves organizational and personal knowledge reduces data center risk. Experienced operators can lower failure rates, but an overly complex design can still fail if it is operated without adequate training.
Posted by hank on PIXNET · Category: Data centre
  • Sat, Aug 11, 2018, 09:51
  • Computer room construction: why do enterprises build their own data centers?

Computer room construction
Do you consider data management to be the core of your company's business? Unless your company is actually in the data management business, data management should never be its core.
Your real core business is the one your company lives on: the one that defines which industry you are actually in. Your company may manufacture shoes or machine parts, or turn raw materials into products through a production process. Whichever way it makes its living, data management is not the core of its business. As the futurist Geoffrey Moore puts it, data processing or data management is merely "context."
Why does your company own a data center?
When your company built its own data center, few other organizations likely had one, and building your own naturally felt like the obvious choice. Look back about two hundred years, though, and you will find that most manufacturers built their factories along rivers so they could install their own water-wheel-driven generators. The real reason those owners had to build along rivers was that no municipal electric utility yet existed.
When Amazon, IBM, Microsoft, and a series of other large IT companies first began promoting "cloud services" that could run your business, but run it in other companies' data centers, you did not believe them. You worried about security, reliability, and whether the services were truly cost-effective. So your company hesitated and chose not to put its high-value data assets in the cloud.
Instead, your company built its own data center or leased colocation space, which meant it also had to manage and maintain the hardware and IT infrastructure software that processed its data. It had to house all of this in a large operational space with abundant power, Internet connectivity, heavy-duty cooling, generators, battery-based backup power, and miles of copper and fiber cabling. It invested in digital and physical security systems to keep it all safe, then kept investing so the whole facility stayed running, maintained, and growing, with enough capacity for the company's expanding business.
What has changed?
Today, public cloud services have been amply proven secure and reliable. Just look at the large providers: Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM SoftLayer, and others. You can be sure none of these giants would risk offering an insecure or unreliable service.
Today's Internet provides a high-speed, worldwide data distribution system that is reliable, ubiquitous, and highly cost-effective. Unlike in the last century, it now connects almost anyone, anywhere, through strategically located public cloud data centers.
When should your company give up its own data center?
The smartest way to migrate from your own on-premises data center to the public cloud is to evaluate, step by step, each piece of data and each workload to be migrated. This takes considerable time, during which your operations become a hybrid environment.
So the first step is the transition from an on-premises environment to a hybrid one. As you gradually move each data workload and application to the public cloud, you remain hybrid. Eventually, though, there may come a day when so many workloads have moved to the cloud that your business no longer needs a data center of that scale. At that point you can shrink it gradually, because doing so simply makes more sense.
Ultimately, your company can remove every application, all data, and all workloads from its own data center and migrate them to public cloud infrastructure, perhaps with a single provider, but more likely spread across several. Create a cloud migration roadmap by prioritizing the best destination for each workload and application. Once everything has been migrated, your company can give up its own data center.
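One simple way to build such a roadmap is to score each workload and sort the migration waves. This is only an illustrative sketch; the field names, example workloads, and weights are assumptions, not anything prescribed here:

```python
# Hypothetical prioritization sketch for a cloud migration roadmap.
# Each workload gets a score: favor what is easy to move (cloud_ready)
# and low-risk to the business (business_risk). All values are made up.

workloads = [
    {"name": "static web", "cloud_ready": 0.9, "business_risk": 0.2},
    {"name": "ERP",        "cloud_ready": 0.4, "business_risk": 0.9},
    {"name": "backups/DR", "cloud_ready": 0.8, "business_risk": 0.3},
]

def migration_score(w):
    # Higher score -> migrate earlier.
    return w["cloud_ready"] - w["business_risk"]

roadmap = sorted(workloads, key=migration_score, reverse=True)
print([w["name"] for w in roadmap])
```

The ordering matches the pattern described below: easy, low-risk workloads such as backups and DR move first, while critical systems like ERP move last, once the team has experience.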
By then, the data center equipment your company used for years will have paid for itself through productive use and financial depreciation. Software licenses no longer need to be renewed. Every investment in the data center has delivered its return, and you now pay only predictable, budgetable, and significantly lower operating expenses, a remarkable saving. You can complete the move to the cloud without business interruption and without risk of data loss, and you keep full control of your data, because you took ample time to transition from on-premises to hybrid cloud.
Is your company eager to get out of the data center business but struggling to fund the move? We suggest following the successful example of many other enterprises: stop renewing maintenance contracts on old, obsolete storage, stop paying to refresh aging servers, and use that budget to fund your cloud migration project.
This is how most enterprise customers handle the cloud transition: budget originally earmarked for hardware and software refreshes funds the migration project, and the company gives up its own data center. Typically the disaster recovery (DR) data center goes first, with backups moved to the cloud; then applications and their data are migrated, and the remaining data centers are shut down.
Over the past few years, thousands of enterprise customers have made this transition. Some now host all their data and business applications entirely in the public cloud. More have closed their DR data centers and hope to shrink or close the rest within a few years. Most are at some stage of hybrid cloud adoption and are just beginning to migrate business-critical workloads. By taking full advantage of the machine learning, data analytics, and other advanced IT services the cloud offers, they can focus on modernizing their core business.
Eventually, most enterprise customers will give up their own data centers and exit the business of maintaining data center hardware and IT infrastructure software entirely, because it was never the core of their business.
Today the Internet provides a reliable, secure data distribution system with the enhanced performance once reserved for dedicated networks. Enterprises can continue their cloud journey at their own pace, staying hybrid until the day they no longer own or lease any data center. At that point IT becomes an advantage that powers the business, letting you focus on what you do best: your true core business.
  • Sat, Aug 11, 2018, 09:35
  • Datacenter migration: obstacles to adopting liquid cooling

Datacenter migration
Datacenter migration: the rise of machine learning has driven power densities in data centers ever higher. Facilities deploying large numbers of servers now see densities of 30 kW to 50 kW per rack, prompting some data center operators to switch from air cooling to liquid cooling.
Although some operators adopt liquid cooling to improve facility efficiency, the main driver is the need to cool increasingly power-dense racks.
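The underlying driver is simple heat-transfer arithmetic, q = m·cp·ΔT: water carries roughly four times more heat per kilogram per kelvin than air. A back-of-envelope sketch (the 40 kW rack load is taken from the range above; the temperature rise and fluid properties are standard textbook assumptions):

```python
# Coolant mass flow needed to remove rack heat: q = m_dot * cp * delta_T.
# Property values are textbook figures; rack power is from the 30-50 kW range.

def mass_flow_kg_s(power_w, cp_j_per_kg_k, delta_t_k):
    return power_w / (cp_j_per_kg_k * delta_t_k)

power = 40_000   # W, mid-range of the cited 30-50 kW per rack

air = mass_flow_kg_s(power, 1005.0, 15.0)    # air, assumed 15 K rise
water = mass_flow_kg_s(power, 4186.0, 15.0)  # water, same rise

print(f"air:   {air:.2f} kg/s  (~{air / 1.2:.2f} m^3/s of airflow)")
print(f"water: {water:.3f} kg/s (~{water / 998 * 1000:.2f} L/s)")
```

Moving more than two cubic meters of air per second through a single rack is impractical, which is why dense racks push operators toward liquid.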
But the conversion from air cooling to liquid cooling is not simple. Here are some of the major obstacles to adopting liquid cooling in data centers:
1. Two cooling systems are required
Lex Coors, chief technology officer for data centers at the European colocation giant Interxion, says it makes little sense for an existing data center to switch to liquid cooling all at once: the operations teams at many facilities would then have to manage and operate two cooling systems rather than one.
This makes liquid cooling a better fit for new data centers, or for facilities undergoing major modifications.
There are always exceptions, especially among the largest operators, whose unique infrastructure problems often demand unique solutions.
Google, for example, is currently converting the air-cooling systems in many of its existing data centers to liquid cooling to cope with the power density of the TPU 3.0, its latest machine learning processor.
2. Lack of industry standards
The lack of industry standards for liquid cooling is a major obstacle to its widespread adoption.
"Customers must first have IT equipment designed for liquid cooling," Coors said. "And the standardization of liquid cooling is far from complete; organizations can't simply adopt it and expect it to work."
Interxion's customers do not currently use liquid cooling, but the company is prepared to support it if necessary, Coors said.
3. Electric shock hazard
Many liquid cooling solutions rely on dielectric liquids, which are non-conductive and pose no shock hazard. But some organizations may use cold or warm water for cooling instead.
"If a worker happens to touch the liquid at the moment it leaks, there's a risk of electric shock and death, but there are many ways to deal with it," Coors said.
4. Corrosion
Corrosion is a major problem for liquid cooling, as it is for any system that carries liquid through pipes.
"Pipeline corrosion is a big problem, one of the problems people need to solve," Coors said. Liquid cooling manufacturers are improving pipes to reduce the risk of leaks and to seal pipes automatically if a leak occurs.
He added: "At the same time, the rack itself needs to be contained, so that if there is a leak the liquid stays within the rack and does no great harm."
5. Operational complexity
Jeff Flanagan, executive vice president of Markley Group, which plans to launch liquid-cooled high-performance cloud computing data center services early next year, said the biggest risk of liquid cooling may be the added operational complexity.
As data center operators, we prefer simple technologies: the more components there are, the more likely something is to fail. With direct-to-chip liquid cooling, liquid flows through every CPU or GPU in the server, adding many components to the cooling loop and, with them, more opportunities for failure.
Immersing servers in dielectric fluid adds a further operational complication: it demands more advanced insulation technology.
  • Fri, Aug 10, 2018, 12:05
  • Computer room construction: keeping machine room server systems secure!

Computer room construction
With the constant evolution of IT, new viruses emerge endlessly and hackers' tricks keep multiplying. Servers sitting in the relatively open environment of the Internet face greater risks than ever: mounting attacks, security vulnerabilities, and the threat of commercial espionage all endanger them. Server security is drawing ever more attention, so how do we protect it? Below, Tianhu Data offers the following tips for keeping servers secure.
1. Start with the basics: install system patches promptly
Whether the system runs Windows or Linux, every operating system has vulnerabilities. Patching promptly, before flaws can be deliberately exploited, is one of the most important guarantees of server security.
2. Install and configure a firewall
Many hardware and software firewalls are available, and most security vendors offer related products. A firewall is essential for server security and is a good defense against unauthorized access, but installing one does not by itself make a server secure. After installation, you must configure the firewall for your own network environment to get the best protection.
3. Install network antivirus software
Viruses are rampant on today's networks, so network servers need a network edition of antivirus software to control their spread. The software must be upgraded regularly and promptly, with the virus database updated automatically every day.
4. Close unneeded services and ports
A server operating system starts some unneeded services at installation, which wastes system resources and enlarges the attack surface. A server that will not be used for a period of time can be shut down entirely; on servers that remain in service, disable the services you do not need, such as Telnet, and close any TCP ports that do not have to be open.
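A quick way to spot legacy services that should be closed is to probe the host's ports. A minimal sketch, with the caveat that the port list here is an illustrative assumption; in practice you would compare against your own approved baseline:

```python
# Probe a host for commonly exposed TCP ports that are often unneeded.
# connect_ex returns 0 if the connection succeeds (port is listening).

import socket

SUSPECT_PORTS = {21: "ftp", 23: "telnet", 139: "netbios", 3389: "rdp"}

def is_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def audit(host="127.0.0.1"):
    """Return {service_name: open?} for each suspect port."""
    return {name: is_open(host, port) for port, name in SUSPECT_PORTS.items()}

if __name__ == "__main__":
    for svc, open_ in audit().items():
        print(f"{svc}: {'OPEN - consider disabling' if open_ else 'closed'}")
```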
5. Back up the server regularly
To guard against unforeseen system failures and careless or unauthorized operations, the system must be backed up. In addition to a monthly full-system backup, back up modified data weekly. Store copies of modified critical system files on different servers, so that when the system crashes (usually a disk failure) it can be restored to a working state promptly.
6. Protect accounts and passwords
Accounts and passwords are the first line of defense for a server system; most attacks on servers today begin by intercepting or guessing a password. Once an intruder is inside, the defenses in front are largely useless, so managing the server's administrator accounts and passwords is a critical security measure.
7. Lay out data center equipment in hot/cold aisles
Although this technique dates back to the mid-1990s, it remains effective. The design delivers cold air through the cold aisle directly to the intake vents at the front of the servers, while the hot exhaust from the power supplies at the rear is ducted away, greatly reducing the energy spent on cooling.
8. Monitor system logs
By running a system logging program, the system records how every user uses it, including last login time, account used, and activities performed. The logging program generates periodic reports; analyzing them tells you whether anything abnormal has occurred.
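As a concrete example of turning raw logs into an anomaly report, here is a minimal scan for repeated failed logins. The log lines and regular expression are hypothetical; adapt them to your actual syslog format:

```python
# Flag accounts with repeated failed logins in a (hypothetical) auth log.

import re
from collections import Counter

LOG = """\
2018-08-10 11:02:01 sshd: Failed password for root from 10.0.0.5
2018-08-10 11:02:03 sshd: Failed password for root from 10.0.0.5
2018-08-10 11:02:05 sshd: Failed password for admin from 10.0.0.9
2018-08-10 11:03:00 sshd: Accepted password for hank from 10.0.0.2
"""

FAILED = re.compile(r"Failed password for (\w+)")

def failed_login_counts(text):
    """Count failed login attempts per account name."""
    return Counter(FAILED.findall(text))

threshold = 2  # assumed alerting threshold
suspicious = {u: n for u, n in failed_login_counts(LOG).items() if n >= threshold}
print(suspicious)   # {'root': 2}
```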
Server security is a serious matter. If you do not want important data destroyed by viruses or hackers, or stolen by people who might use it against you, the tips in this article may help.
  • Fri, Aug 10, 2018, 11:18
  • Datacenter migration: how to deal with old servers in the computer room

Datacenter migration
A data center migration leaves many people wondering how to handle their old server hardware. Why does July 14th matter? It was the final date of Microsoft's support for Windows Server 2003. Reportedly, about 40% of servers in China were still running the soon-to-retire system. More and more of those old systems will be upgraded in this period, and with them a great deal of server hardware running Windows Server 2003 will be ready for retirement.
Old server hardware cannot simply be thrown away: discarding it carelessly not only pollutes the environment but can also leak data. So how should retired server hardware be dealt with?
There are several options to consider:
1. Donate it
If your business has moved to newer hardware, the old equipment can find a good home rather than a landfill. Putting it to use through a well-run organization not only solves the disposal problem but also enhances your corporate social responsibility image.
Take the Electronic Recycling Association, for example: nonprofit organizations around the world will take the equipment you no longer need and put it to good use.
2. Sell it second-hand
Just as you might sell your old iPad after buying a new device, you can sell old server equipment on the second-hand market.
You may find that enthusiasts, and even small businesses, will turn old servers into home media streaming systems or use them to run SharePoint.
If you would rather not haggle, you can instead sign an agreement with a potential buyer who takes responsibility for recycling the old equipment.
3. Dispose of it responsibly
If your servers really are at the end of their useful life, but you would rather not hand them to anyone else, you need to deal with the e-waste yourself.
Handling e-waste is not simply a matter of throwing it out; e-waste can do enormous harm to the environment.
In any case, when you retire these machines, you must properly destroy the data on the hard drives to keep anyone with ill intent from stealing your company's data, because recovery services can restore data from an old drive.
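To illustrate that last point, here is a minimal single-pass file-overwrite sketch. It is only an illustration: proper drive sanitization should follow a recognized procedure such as NIST SP 800-88 (ATA Secure Erase, degaussing, or physical destruction), and simple overwriting is unreliable on SSDs because of wear-leveling:

```python
# Illustrative overwrite-then-delete for a single file. NOT a substitute
# for whole-drive sanitization (see NIST SP 800-88); SSD wear-leveling
# can leave stale copies of data that file-level overwrites never touch.

import os
import tempfile

def shred_file(path, passes=1):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # force the write to disk
    os.remove(path)

# Demo on a throwaway temp file:
fd, p = tempfile.mkstemp()
os.write(fd, b"confidential customer records")
os.close(fd)
shred_file(p)
print(os.path.exists(p))   # False
```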