
Monday, March 29, 2010

Google is building a private Internet that's far better, and greener, than the Internet...

http://blogs.zdnet.com/Foremski/?p=1266&tag=nl.e539
Posted by Tom Foremski @ 10:00 pm
The Internet is huge but it’s a hodgepodge of hundreds of thousands of smaller, private networks, connected through thousands of Internet Service Providers (ISPs) and dozens of backbones operated by the large Telcos and service providers.
Moving data from one end of the Internet to the other can mean traveling across many different computers and different networks. Some of these computers and networks are old and inefficient while some are modern and very efficient.
They are all tied together into what we call the Internet, through a collection of standards. These standards determine how a packet of data can reach its destination, complete and undamaged.
Many large Internet companies own large chunks of the Internet through building their own data centers, networks, backbones, etc. This helps to keep their costs down.
Google is big…
Google is one of those companies that owns a large chunk of the Internet. It has more than 50 data centers around the world; it builds its own servers; it operates its own backbones that shuttle huge amounts of data across the world; it develops its own software for managing all of its data; it keeps banks of servers in the data centers of ISPs so that it can cache data closer to delivery; and more, much more.
How big is Google? asks Arbor Networks. It’s a rhetorical question because Arbor knows: it sells network control and monitoring hardware used by the largest ISPs and corporations.
Arbor says that Google is very big:
I mean really big. If Google were an ISP, it would be the fastest growing and third largest global carrier. Only two other providers (both of whom carry significant volumes of Google transit) contribute more inter-domain traffic. But unlike most global carriers (i.e. the “tier1s”), Google’s backbone does not deliver traffic on behalf of millions of subscribers nor thousands of regional networks and large enterprises. Google’s infrastructure supports, well, only Google.
Based on data from 110 ISPs collected in the summer of 2009, Google was responsible for as much as 10% of all Internet traffic.
If a company wants to compete with Google on a large scale, the cost of shuttling data packets around, whether they are Twitter packets or video packets, starts to become very important.
Arbor says:

The competition between Google, Microsoft, Yahoo and other large content players has long since moved beyond just who has the better videos or search. The competition for Internet dominance is now as much about infrastructure — raw data center computing power — and about how efficiently (i.e. quickly and cheaply) you can deliver content to the consumer.
And that’s why Google has focused on building the most efficient, lowest cost to operate, private Internet. This infrastructure is key to Google, and it’s key to understanding Google.
The cost of aluminum…
Google will locate its massive data centers where electricity costs are low, such as places with hydro-electric power. There’s a shortcut to finding these locations: look for places with aluminum smelters, which use huge amounts of electricity.
Google was one of the first companies to realize that electric power costs would be important in determining the cost of data centers. Today, it is high on the list of priorities for all data centers. That’s also why it has been investing in power generating technologies such as wind, solar, and geothermal.
It has a key goal of generating electric power from renewable energy sources at a cost less than coal-generated electric power. That would be an incredible achievement.
Always lower costs…
Google always focuses on finding the lowest costs even though it can easily afford to pay more. Google builds its own servers, made from off-the-shelf low cost components, with cheap hard drives. It has developed its own software that deals with component failure and moves work loads across huge numbers of servers. Managing failure is built into Google’s data center operating systems.
It has bought up lots of “dark fiber,” at a very low cost. This is optical fiber that hasn’t yet been ‘lit’ but it is in the ground, in place, ready to be hooked up.
Because Google has so much fiber, it operates one of the largest backbones in the world. It also means that it can trade bandwidth with others.
Large Telcos and ISPs have peering arrangements with each other. This means that if they have the capacity, they will carry extra traffic for each other. These peering arrangements mean that Google’s bandwidth bill for all that YouTube video is zero.
It’s difficult to believe, but your bandwidth bill to watch a YouTube video is more than Google’s. Because of bartering through peering agreements, its only cost is in maintaining its own networks and backbones.
Skipping the last mile…
Google still needs ISPs and Telcos for the last mile, to deliver its various services and products to the end user/consumer. But it has been experimenting with going direct.
It has experimented with free municipal Wi-Fi and, more recently, it is setting up high-speed broadband for communities of 500,000 people or fewer.
This doesn’t necessarily mean that Google wants to become an ISP or a Telco. It is not a service organization and it doesn’t want that headache, but it does want to spur ISPs and Telcos to develop high-speed data connections, so that it can deliver future products and services that require high speed data.
The Internet is becoming ever more Google’s…
Google’s growth means that it is building a much faster, more power-efficient, and greener Internet. And through peering agreements it is carrying much more than just Google traffic; it is quickly and quietly becoming an important carrier for all Internet traffic.
There are huge indirect benefits from Google’s work that make the Internet a better service for every Internet user.
Essential facility…
What will this lead to? It’s going to lead to regulatory scrutiny because Google will be increasingly seen as an ‘essential facility’ vital for the economies of regions, nations, and entire trading blocs.
Increased scrutiny by governments, and regulatory bodies, will make it more difficult for Google to execute on its business strategies. Combined with the increased scrutiny of Google’s acquisitions by the Federal Trade Commission, Google’s future ambitions will become ever more restricted.
Google sees the writing on the wall. It has boosted how much it spends on lobbying in Washington. [Antitrust Heat -- Google Spends Millions To Influence Washington - SVW]
A layer cake business…
Google might decide that its value lies in its incredibly efficient infrastructure, which is far more efficient and lower cost than the Internet as a whole.
Once you have the lowest cost infrastructure, you can layer and scale other business services on top. Such as payment systems, basic voice and data services, security systems, and commerce platforms (advertising).
Google might decide it doesn’t need to own a Facebook, a Twitter, a Yahoo, or an Amazon — not when it can host all the data packets. It can carry and trace a data packet from source to destination and back again — it can mine all that transactional data. That’s extremely valuable.
It’s a little-known fact that Google keeps all of its data, all transactional data. It erases part of the identifiable metadata, but that can be reconstructed. [Google Keeps Your Data Forever - Unlocking The Future Transparency Of Your Past - SVW]
That transactional data is incredibly valuable, and even though we can’t unlock it to its fullest value today, Google is working on it.
No umbrella…
By being able to build the most efficient, private Internet, Google makes it extremely difficult for any competitor to challenge it. There is no ‘price umbrella’ that competitors can use.
For example, there used to be lots of mainframe computer companies because IBM, the largest mainframe computer maker, used to charge very high prices. There was a substantial price umbrella set by IBM that sheltered competitors, and allowed them to sell IBM compatible mainframes and still make a good living.
You can see similar price umbrellas in other business sectors.
Google has made sure that by building the most efficient, lowest cost infrastructure, there is no price umbrella for competitors to exploit. It’s more like a manhole cover: try to get under it, and you fall into a hole…
This strategy means that Google leaves money on the table; it could make more money over the short term by creating a price umbrella. Instead, it has chosen a long-term business strategy that doesn’t give competitors any toehold, let alone an umbrella.
Its stock ownership is set up so that founders’ stock has ten times the voting rights of public shares, which allows it to avoid shareholder pressure to pursue short-term business goals.
This all adds up to make Google into a truly formidable force, and one that continually amasses greater powers and influence. ‘Do no evil’ is the very least it can do.

Friday, March 26, 2010

General knowledge: how much do you know?

Do you know: Multiple choice

(Ming Pao) Friday, March 26, 2010, 05:10 [Ming Pao Special Report]

1. At the two sessions, Premier Wen Jiabao said that in drawing up the 12th Five-Year Plan the central government would take account of Hong Kong's close economic ties with the mainland, especially its ties with which area?
A. The Pearl River Delta
B. The Yangtze River Delta
C. Guangzhou
D. Shenzhen

2. Meeting Hong Kong deputies to the National People's Congress, Vice-President Xi Jinping called for a correct understanding and accurate grasp of the close relationship between "one country" and "two systems", saying Hong Kong people should understand the Basic Law correctly and stressing that, besides "safeguarding Hong Kong's high degree of autonomy", what else must be safeguarded?
A. The Hong Kong spirit
B. Hong Kong's social harmony
C. The state's leadership
D. National unity and security

3. Which Hong Kong deputy to the NPC proposed at the two sessions that Hong Kong should legislate for Article 23 as soon as possible?
A. Wong Man-kong (王敏剛)
B. Wong Kwok-kin (黃國健)
C. Laura Cha (史美倫)
D. Choy So-yuk (蔡素玉)

4. Legislative Council President Tsang Yok-sing arranged for all legislators to visit the mainland in May, but several pan-democratic legislators asked to set off only after the by-election on May 16. Which major event are they going to attend?
A. The Asian Games
B. The two sessions
C. A constitutional development seminar
D. The World Expo

5. In mid-March the Housing Society put all its remaining sandwich-class flats on the market, including units at 叠翠軒, 浩景臺 and 欣圖軒. 欣圖軒, hailed as the king of sandwich-class housing, is located in which district?
A. Lam Tin
B. Ho Man Tin
C. Sha Tin
D. Kam Tin

6. Broadcast rights to the 2010 World Cup went to Cable TV; TVB and ATV failed to buy in and may be unable to show the four key matches. What action did TVB and ATV take over the affair?
A. Wrote to FIFA asking whether the rules had been breached
B. Bought footage from other broadcasters to air
C. Offered a higher price
D. Switched to broadcasting on an online channel

7. The man who threw a shoe at Henry Tang at the Youth Square opening ceremony and youth summit reported working in which trade?
A. Transport
B. Catering
C. Business
D. Unemployed

8. A No. 4 alarm fire at the Lai Cheong Factory Building in Cheung Sha Wan left one firefighter dead and three injured. The investigation found that, in handling the blaze, staff in the Fire Services control room made what mistake?
A. Left their posts without leave
B. Mistakenly deleted the message upgrading the fire alarm
C. Dozed off on duty
D. Misjudged conditions at the fire scene

9. A Philippine tour group, suspecting it had been defrauded, was stranded in Hong Kong for six hours before which legislator helped its members check into a To Kwa Wan hotel?
A. Lee Cheuk-yan
B. Paul Tse
C. James Tien
D. Tsang Yok-sing

10. The ICAC arrested suspects in a TVB corruption case, alleged to have profited by pocketing price differences. Besides Stephen Chan Chi-wan (陳志雲) and Chan Wing-suen (陳永孫), which other TVB executive was among those arrested?
A. 叢培崑
B. 寧進
C. 錢國偉
D. 梁志昌

11. The Urban Renewal Authority announced it would abandon the redevelopment of which street, preserving the filming location of Echoes of the Rainbow (《歲月神偷》)?
A. Wing Lee Street
B. Staunton Street
C. Gage Street
D. Graham Street

12. He Pingping, the world's shortest man at 74.6 cm (2 ft 5 in), was taken ill with heart trouble while filming a programme in Rome and died in hospital in mid-March, aged 21. In which part of China was he born?
A. Heilongjiang
B. Yunnan
C. Xinjiang
D. Inner Mongolia

13. Premier Wen Jiabao took questions from netizens online, receiving more than 100,000. Which four-character phrase did he use to describe himself as belonging to the people?
A. 公共財產 (public property)
B. 平民總理 (the people's premier)
C. 走入群眾 (going among the masses)
D. 溝通橋樑 (a bridge of communication)

14. Foreign Minister Yang Jiechi used painting as a metaphor to criticise, without naming them, Western countries led by the US for misunderstanding China. In his metaphor, by the standards of which kind of painting did he say the West judges Chinese ink-wash painting?
A. Watercolour
B. Oil painting
C. Sketching
D. Mural

15. Together with which country did China, on the same day in early March, notify the UN climate change secretariat of the two countries' support for the Copenhagen Accord?
A. Brazil
B. Russia
C. India
D. Indonesia

16. China announced that selection of its new batch of astronauts is complete. How many of the seven are women?
A. 2
B. 3
C. 4
D. 6

17. Spanish La Liga club Real Madrid plans to open a football school in China. Which city is its first choice?
A. Qinhuangdao
B. Shanghai
C. Beijing
D. Hong Kong

18. While playing for which team did David Beckham tear his left Achilles tendon, very likely ruling him out of the World Cup?
A. England
B. LA Galaxy
C. Real Madrid
D. AC Milan

19. The daughter of Japan's Crown Prince Naruhito is suspected of having been bullied at Gakushuin, the school she attends, leaving her deeply distressed by the rough behaviour of several boys there. What is the young princess's name?
A. 美嘉
B. 美子
C. Aiko (愛子)
D. Masako (雅子)

20. With which film did director Kathryn Bigelow beat hot favourite Avatar to Best Picture at the Academy Awards, becoming the first woman in Oscar history to win Best Director?
A. The Hurt Locker
B. The Blind Side
C. Precious
D. Crazy Heart

Do you know: Matching

(Ming Pao) Friday, March 26, 2010, 05:10 [Ming Pao Special Report]

1. Iceland ˙
2. Iraq ˙
3. Britain ˙
4. Thailand ˙
5. Mexico ˙

˙A. After the country's banks collapsed, Britain and the Netherlands demanded compensation; the president, bowing to public opinion, refused to sign the compensation bill and put it to a referendum, in which it was ultimately rejected.

˙B. Severe transport strikes hit in mid-to-late March: one airline's cabin crew staged a three-day walkout from the 20th, and the country's railway workers may mount their biggest strike in 16 years over Easter.

˙C. On parliamentary election day, a string of bomb attacks killed 38 people, yet voters still turned out in force, with long queues at many polling stations.

˙D. Three US consulate employees and family members were assassinated, apparently in connection with drug traffickers; Obama was "deeply saddened and outraged" by the news.

˙E. The red shirts urged supporters of former prime minister Thaksin to donate a million millilitres of blood, which they splashed at Government House to press their demand for the dissolution of the lower house of parliament; the local Red Cross criticised the move as a waste of blood.

Do you know: Fill in the blanks

(Ming Pao) Friday, March 26, 2010, 05:10
[Ming Pao Special Report] Before the two sessions, 13 mainland media outlets jointly published a "(1)_________" urging the sessions' deputies and committee members to press the authorities to speed up reform of the household registration (hukou) system and ultimately abolish it altogether. After the article appeared, however, the 13 outlets were muzzled: the related reports were deleted, and journalists were reprimanded or dismissed. At the sessions, Premier Wen Jiabao responded with a pledge to "press ahead with reform of the household registration system", including relaxing (2)_______ conditions in small and medium-sized cities and small towns and guiding the rural population to gather in small towns.

The government work report, for its part, focused on the mainland's high housing prices and set out four property policies, including the construction of three million units of "(3)______ housing", while stressing development of the rental market so that those who cannot afford to buy a home still have somewhere to live.

The sessions also voted on this year's personnel matters, electing one additional vice-chairman of the CPPCC; the sole candidate, (4)________, was elected. The 11th (5)______ Lama, whom outsiders had speculated would also be elevated to vice-chairman, was not on the candidate list.

Password guessed: French hacker broke into Obama's Twitter account

(Ming Pao) Friday, March 26, 2010, 05:10
[Ming Pao Special Report] French police, assisted by US federal agents, have arrested a 25-year-old man suspected of illegally breaking into the Twitter accounts of several celebrities, among them US President Barack Obama and singer Britney Spears. Police said the arrested man was no genius hacker: he simply followed various clues to "guess" account holders' passwords. In France, breaking into someone else's database is a criminal offence punishable by up to two years in prison.
In January last year, several Twitter accounts, including Obama's and Britney Spears's, were broken into and sent out bogus, bizarre messages. US federal agents suspected the source was in France and contacted French police last July. After months of investigation by the two countries, a 25-year-old unemployed hacker was arrested on Tuesday in Clermont-Ferrand in central France. He lives with his parents and calls himself "Hacker Croll" online.
Suspect claims he was exposing network flaws
The hacker had previously defrauded people of about HK$155,000 in France. Prosecutor Jean-Yves Coquillat put it bluntly: "He is just an ordinary young man who spends his time on the Internet. He did this, out of a hacker's arrogance, after a bet with someone. He is the type who would boast of doing such things." The authorities believe he apparently just wanted to prove to others that he could view confidential information. The arrested Frenchman insisted yesterday that he was not a hacker but a "benevolent pirate" who only wanted to expose security holes: "My aim was not to cause damage... I only wanted to warn them, to show the weaknesses of the system."
Last July, US technology site TechCrunch.com reported receiving files from a netizen calling himself "Hacker Croll" containing 310 confidential Twitter documents and personal data, including minutes of Twitter executives' meetings, business agreements, financial projections, work schedules, phone directories, office floor plans and other material.
Twitter co-founder Evan Williams told TechCrunch at the time that the documents had been lost in an attack, but claimed the hacker had not gained control of any Twitter accounts.
French police said that besides attacking several American celebrities' Twitter accounts, the hacker also attacked email accounts at Facebook, Google and other providers, but made no profit from the intrusions.
Blog trivia leaked password clues
Police said the hacker had no special technical background; he simply read material on other people's websites and blogs to work out their account passwords.
To spare users who forget their passwords, many webmail services require a "secret question" and a personal answer at sign-up, such as "What is your pet's name?" or "What date is your wedding anniversary?". The arrested hacker obtained celebrities' Twitter passwords by gathering information and answering these secret questions correctly. He even set up a blog to boast of his discoveries.
Twitter was knocked out for several hours by a hacker attack last August.
AFP / BBC / Daily Telegraph

Saturday, March 20, 2010

VirnetX sues Microsoft over patents again, now taking aim at Windows 7

http://blogs.zdnet.com/microsoft/?p=5616&tag=nl.e539
March 18th, 2010 Just days after winning Round 1 of its patent-infringement case against Microsoft, VirnetX has filed another patent-infringement case against the Redmondians. This time, VirnetX is taking aim at Windows 7.
The new case, which came to light on March 18, again alleges that Microsoft is using VirnetX’s virtual private networking (VPN) patents without paying for their use. VirnetX’s original case against Microsoft, filed in 2007, cited Windows Server 2003, XP, Vista, Live Communications Server, Windows Messenger, Office Communicator and various versions of Office as infringing on two of VirnetX’s patents. The new pleading focuses on Windows 7, claiming it infringes on these same patents.
Kevin Kutz, Director of Public Affairs, said Microsoft hadn’t yet seen VirnetX’s new claim.
“While we can’t comment specifically about the new complaint because we have not been served, Microsoft respects intellectual property, and we believe our products do not infringe the patents involved.  Moreover, we believe those patents are invalid.  We will challenge VirnetX’s claims.”
A Texas jury on March 16 recommended Microsoft pay VirnetX $105.75 million for willfully infringing on two VirnetX networking patents. Microsoft officials said they are appealing that ruling.
McKool Smith, the law firm representing VirnetX, is the same one that represented i4i, which won a $200-million-plus patent-infringement verdict against Microsoft. Judge Leonard Davis, the same judge who presided over the i4i case, was the judge overseeing the VirnetX case as well.
VirnetX, a subsidiary of VirnetX Holdings, is “focused on commercializing a patent portfolio for securing real-time communications over the Internet,” explains the company in its November 10-Q.
Update: I just received more on VirnetX’s new suit, which it filed on March 17, 2010. Here’s VirnetX’s official statement:
VirnetX “filed a complaint in the Tyler Division of the Eastern District of Texas alleging infringement of U.S. Patent Nos. 6,502,135 and 7,188,180 by Microsoft’s Windows 7 and Windows Server 2008 R2 software products.”
“This is a tactical and procedural post-trial action to ensure and protect our property rights as we proceed to final resolution with Microsoft,” said Kendall Larsen, VirnetX President and CEO.

Friday, March 19, 2010

Ratio Analysis

Financial ratios fall into five main groups: (1) liquidity ratios, (2) leverage ratios, (3) profitability ratios, (4) efficiency ratios and (5) market value ratios. Their purpose is to analyse a company's performance.

Liquidity ratios measure a company's short-term solvency: its ability to sell or liquidate assets for cash in order to repay short-term debts. The higher the liquidity ratio, the more readily the company's assets can be turned into cash, and the less financial difficulty it faces.

Current ratio = current assets / current liabilities

Interest coverage ratio = profit before interest and tax / interest

Quick ratio = (current assets - inventory) / current liabilities

Leverage ratios measure a company's long-term solvency, showing the ratio of its debt to its capital and its ability to pay interest and other fixed charges. The higher the leverage, the more debt the company carries, which means it may not be able to repay what it owes.

Long-term debt ratio = long-term debt / shareholders' equity

Total debt ratio = (short-term debt + long-term debt) / shareholders' equity

Profitability ratios measure the company's overall earning performance and how effectively it employs its assets, liabilities and funds.

Net profit margin = profit after tax / turnover

Operating profit margin = operating profit / turnover

Return on equity = profit after tax / shareholders' equity

Return on assets = profit after tax / total assets

Return on capital employed = profit after tax / (total assets - current liabilities)

Efficiency ratios show whether the company uses its assets effectively and manages its overall operations well.

Inventory turnover = turnover / average inventory

Asset turnover = turnover / total assets

Market value ratios serve to compare the values of different companies. They do not appear in the financial statements, and only listed companies can compute them.

Price-earnings ratio = current share price / earnings per share

Market-to-book ratio = market value of shareholders' equity / book value of shareholders' equity
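
To make the definitions concrete, here is a minimal Python sketch that computes one representative ratio from each of the five groups. Every figure is hypothetical, purely for illustration:

```python
# Hypothetical balance-sheet and P&L figures (HK$ millions), for illustration only.
current_assets      = 500.0
inventory           = 120.0
current_liabilities = 250.0
long_term_debt      = 300.0
shareholders_equity = 600.0
total_assets        = 1200.0
profit_after_tax    = 90.0
turnover            = 800.0
share_price         = 12.0   # HK$ per share
earnings_per_share  = 1.5    # HK$ per share

# (1) Liquidity: the quick ratio strips out inventory, the least liquid current asset.
quick_ratio = (current_assets - inventory) / current_liabilities
# (2) Leverage: long-term debt relative to shareholders' equity.
long_term_debt_ratio = long_term_debt / shareholders_equity
# (3) Profitability: net profit margin on turnover.
net_profit_margin = profit_after_tax / turnover
# (4) Efficiency: how much turnover each dollar of assets generates.
asset_turnover = turnover / total_assets
# (5) Market value: price-earnings ratio (listed companies only).
pe_ratio = share_price / earnings_per_share

print(f"Quick ratio:          {quick_ratio:.2f}")
print(f"Long-term debt ratio: {long_term_debt_ratio:.2f}")
print(f"Net profit margin:    {net_profit_margin:.1%}")
print(f"Asset turnover:       {asset_turnover:.2f}")
print(f"P/E ratio:            {pe_ratio:.1f}")
```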
 
 
 
Bank ratios assess the adequacy of a bank's capital, ensuring that the bank has sufficient cash flow, can meet all its obligations as they fall due, and operates cost-effectively.

Capital adequacy ratio = capital base (Tier 1 capital + Tier 2 capital) / risk-weighted assets

Tier 1 capital ratio = Tier 1 capital / risk-weighted assets

Liquidity ratio = liquid assets / qualifying liabilities

Cost-to-income ratio = operating expenses / total operating income

Under the consolidated benchmarks set by the Hong Kong Monetary Authority and the Banking Ordinance, all banks operating in Hong Kong must maintain a capital adequacy ratio above 8%, a Tier 1 capital ratio above 4% and a liquidity ratio above 25%.

Liquid assets mainly comprise interbank deposits maturing within one month, Hong Kong dollar and foreign currency holdings, gold, marketable securities, and advances maturing within one month.

Qualifying liabilities mainly refer to interbank liabilities and other liabilities falling due within one month.

Tier 1 capital comprises ordinary shares, retained profits, paid-in capital and capital reserves.

Tier 2 capital comprises loan-loss reserves or undisclosed capital reserves, preference shares with a term of at least 20 years, revaluation reserves and general loan provisions, and subordinated debt with a term of at least 7 years.
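
A companion sketch, again with purely hypothetical figures, that checks a bank against the three HKMA floors quoted above:

```python
# Hypothetical bank figures (HK$ millions), for illustration only.
tier1_capital          = 80.0
tier2_capital          = 40.0
risk_weighted_assets   = 1000.0
liquid_assets          = 350.0
qualifying_liabilities = 1000.0

capital_adequacy = (tier1_capital + tier2_capital) / risk_weighted_assets
tier1_ratio      = tier1_capital / risk_weighted_assets
liquidity_ratio  = liquid_assets / qualifying_liabilities

# HKMA floors quoted above: CAR above 8%, Tier 1 above 4%, liquidity above 25%.
checks = [("Capital adequacy", capital_adequacy, 0.08),
          ("Tier 1 capital",   tier1_ratio,      0.04),
          ("Liquidity",        liquidity_ratio,  0.25)]
for name, value, floor in checks:
    verdict = "OK" if value > floor else "BELOW FLOOR"
    print(f"{name}: {value:.1%} (floor {floor:.0%}) -> {verdict}")
```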
 
 
Earnings per share = profit after tax / number of ordinary shares in issue

Dividend per share = dividends paid / number of ordinary shares in issue

Net asset value per share = (total assets - total liabilities) / number of ordinary shares in issue
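
The per-share measures in the same style (figures again hypothetical):

```python
# Hypothetical figures (HK$), for illustration only.
profit_after_tax       = 90_000_000.0
dividends_paid         = 30_000_000.0
total_assets           = 1_200_000_000.0
total_liabilities      = 550_000_000.0
ordinary_shares_issued = 60_000_000

eps = profit_after_tax / ordinary_shares_issued
dps = dividends_paid / ordinary_shares_issued
nav_per_share = (total_assets - total_liabilities) / ordinary_shares_issued

print(f"EPS: {eps:.2f}  DPS: {dps:.2f}  NAV per share: {nav_per_share:.2f}")
```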

Bank risk indicator: the capital adequacy ratio

The capital adequacy ratio reflects the extent to which a bank's capital can absorb bad-debt losses.
Regulators use the ratio to limit banks' investment risk indirectly and to keep the banking industry from lending to excess.
A bank whose capital adequacy ratio is too low will find its lending business constrained, and may even have to sell assets to raise fresh capital.
Bear Stearns' fire sale of assets at rock-bottom prices shows how dire its finances were, and the market worries that its peers' finances may now be worse than expected. Bloomberg reported that, dragged down by the steep devaluation of subprime-related investments, the capital adequacy ratio (CAR) of the ten largest US banks fell to a 17-year low, with the Tier 1 capital ratio dropping from 8.7% to 7.3%.
Heavy investment risk can impair a bank's ability to operate
When studying a bank's finances, CAR is an indicator that cannot be ignored. It measures the degree to which the bank's capital cushions it against bad-debt losses. Banks live by lending and constantly face the risk of non-performing loans and bad debts. If a bank takes on risky business beyond what it can bear, or makes high-risk investments, and bad debts pour in, it must write off the unrecoverable loans or losses, eroding profits and perhaps leaving it insolvent; that ultimately undermines its ability to keep operating and harms depositors.
Preventing excessive lending
To forestall this, the Basel Committee on Banking Supervision drew up the Basel Capital Accords (Basel I & II), which require CAR to be kept at 8% or above and the Tier 1 capital ratio at 4% or above, giving regulators an indicator with which to cap banks' investment risk and prevent excessive lending.
The capital adequacy ratio is calculated as follows:
CAR = (Tier 1 capital + Tier 2 capital) / risk-weighted assets x 100%
Tier 1 capital, also called core capital, mainly comprises shareholders' equity and other relatively safe capital; Tier 2 capital, called supplementary capital, includes provisions for bad and doubtful debts, non-perpetual preference shares and the like. The sum of Tier 1 and Tier 2 capital is total capital.
However, simply dividing total capital by assets is not enough: most of a bank's assets are loans it has extended, which carry risk, so a risk-weighting factor must be applied, yielding risk-weighted assets.
The lower an asset's potential risk, the lower its risk weighting (as low as 0%), and vice versa. In other words, the higher the investment risk of the assets, the larger the risk-weighted assets figure, and the more capital (Tier 1 + Tier 2) the bank must inject to keep the statutory minimum of 8%.
The Tier 1 capital ratio, which strips out Tier 2 capital, measures capital adequacy more strictly and so better reflects operating risk. It is calculated much as the CAR is:
Tier 1 capital ratio = Tier 1 capital / risk-weighted assets x 100%
The general slide in US banks' CARs today stems mainly from sharp rating downgrades of the subprime-related bonds they hold, which raised the riskiness of those investments, inflated risk-weighted assets, and dragged CAR down.
A CAR that is too low means raising more capital
If a bank's CAR approaches the regulatory floor, it must seek a fresh injection of capital. Last November, for example, Citigroup received a capital injection of US$58.5 billion from Abu Dhabi, which helped lift its Tier 1 capital ratio by 0.5 percentage points.
However, with capital markets shrinking and stock markets falling, raising funds through share placements is far from easy; and with subprime investment risk so great, many sovereign funds dare not inject capital lightly, making the search for funds doubly difficult. If a bank's CAR falls far below the statutory level, it may have to accelerate asset sales, driving asset prices down rapidly and threatening the stability of the whole financial market.
As noted above, regulators use CAR to restrain banks' lending. A bank whose CAR is too low cannot expand its loan book and, without a fresh injection of capital, may not extend large new loans. This explains why some big US banks are not so much unwilling to "open the taps" to rescue their peers as afraid to act rashly while their own CARs risk slipping.
Sin Chew Daily / Investment Plaza / Investment Tutorial, 2008.07.07
Whose pockets is AIG's bailout money lining?
2009-03-23 13:31:05

Zhao Gang

Although American International Group (AIG) said in last month's request for government aid that its collapse would trigger a "catastrophe" worse than the Lehman Brothers bankruptcy, AIG has in fact been using the government rescue funds all along to pay off the credit default swap (CDS) exposures of trading counterparties including Goldman Sachs.

Yesterday, The Wall Street Journal, citing a confidential document and people familiar with the matter, revealed that since the Federal Reserve began its rescue of AIG, the company has paid about US$50 billion to its trading counterparties to settle CDS exposures. Goldman Sachs Group and Deutsche Bank each received nearly US$6 billion between mid-September and December last year. Others receiving AIG rescue money include Merrill Lynch, Societe Generale, Morgan Stanley, Royal Bank of Scotland Group and HSBC Holdings.

Although how much each institution received remains unclear, the report has renewed attention on criticism of how American bank executives use government rescue funds. The problem is that, judging by the US$300 billion of CDS exposure still on AIG's books, this effectively insolvent insurance giant may well keep asking the government for more taxpayers' money.

According to Merrill Lynch estimates, if AIG refused to compensate other financial institutions for their losses, European banks would be left with US$300 billion of bad debts.

As of last week, AIG had received more than US$180 billion in government capital injections. If the Journal's report is accurate, nearly 30% of all the rescue money the US government has given AIG has in effect gone to help European financial institutions.

When AIG asked the US government for a further rescue at the end of last month, it said that its collapse would set off a wave of chain failures, a "catastrophic collapse" more serious than Lehman's. AIG claimed that if it failed, not only would the money-market funds freeze once again, but European banks that had traded CDS contracts with AIG would be forced to raise further capital, and other life insurers, hit by a collapse in public confidence, would be dragged down and fail with it.

In a "highly confidential" document dated February 26, AIG asked US regulators for a third rescue, saying it needed "immediate" help from the Fed and the Treasury to avert a "catastrophic" collapse that would hit the market harder than Lehman's failure, asking "whether the US economy could still withstand the intense shock an AIG collapse might produce, including a possible fall in the dollar, rising US government borrowing costs, and doubts about the government's capacity to support the country's financial system." AIG even suggested the global financial system as a whole might not escape the impact of its collapse. The US government subsequently announced relaxed terms for AIG's earlier rescue agreements and a further US$30 billion of new financing.

But Phillip Phan, professor of management at Johns Hopkins University's Carey Business School, noted that much of AIG's reasoning sounds like conjecture, unsupported by actual evidence: "It is a way of manufacturing a crisis atmosphere to get policymakers to agree quickly to the rescue."

With the US government now effectively holding 80% of AIG's shares, critics say it has a duty to track where the taxpayers' money has actually gone, since Obama promised that the government would disclose how rescue funds are used.

Thursday, March 18, 2010

Smartphones uptake slow in China despite 3G and iPhone


By Owen Fletcher | Mar 16, 2010
China now has the iPhone and more big-name smartphones are due in the country, but few buyers overall are choosing smartphones despite promotion by China's mobile carriers.
 
High prices are slowing smartphone sales growth despite work to cut their prices down to around 1,000 yuan (US$146). Smartphone sales in China -- not counting sales on the country's gray market -- passed 7 million units in the final quarter of last year, but that accounted for less than 15 percent of mobile phone sales in the country, Chinese consultancy Analysys International said this week.
 
"Price is still the biggest obstacle," said Liu Ning, an analyst at technology consultancy BDA. Smartphones remain expensive because they require more powerful hardware and their makers often must pay to use their operating systems, he said.
 
China's three big carriers -- China Unicom, China Mobile, and China Telecom -- have all sought more diverse handset lineups to match expansion of their young 3G networks. China Unicom late last year started selling the iPhone, China Mobile has a deal to get a BlackBerry model with its homegrown 3G standard and China Telecom has said it is in talks to offer the Palm Pre.
 
But the carriers have also worked to fill in their lower-end handset offerings. China Mobile's chairman has said smartphone sales would get a big boost if prices drop below 1,000 yuan.
 
Many smartphones, including the iPhone, can be had for that price or less when bought with certain mobile service contracts. But the goal is more to get unsubsidized prices down to 1,000 yuan, which the carriers have yet to do for most smartphones, said Wang Liusheng, an analyst at Analysys International. The more common price range now is 1,500 yuan to 3,000 yuan, he said.
 
To boost smartphone sales, the carriers also need to widen their pool of applications and other content, such as music services or mobile TV, Wang said. Each carrier is building its own app download store to expand phone content and pull in more revenue, but the range of apps available remains small.
 
High-end handsets have also faced a rough road. China Unicom sold 100,000 iPhones in roughly six weeks after the phone's launch, well short of post-launch sales figures for carriers in other countries. Many Chinese users have instead bought cheaper versions of the iPhone from outside the country. Android smartphones are starting to appear, but their prices can be high as well. The newly launched Motorola XT701 costs 4,299 yuan on a China Unicom Web site.
 
China Mobile has developed its own mobile OS based on Android, partly to help lower the cost of its smartphones, said Liu of BDA. The entry of Taiwanese chipset vendor MediaTek into the smartphone market will also help cut costs, Liu said.
 
MediaTek chipsets currently power many of the low-end mobile phones sold in China. Microsoft and Google have teamed with MediaTek for phone hardware packages that support the Windows Mobile 6 and Android OSes, respectively.
 
Smartphone penetration is highest in China's major cities, where residents have more buying power than their rural counterparts, said Wang of Analysys. The gradual spread of 3G service is helping boost smartphone use. But 3G remains relatively new and its coverage is best in big cities, he said.
 
"In remote areas it definitely still has a long road to travel," Wang said.
 
IDG News Service (Beijing Bureau)

Wednesday, March 17, 2010

Microsoft's Bing expands on the mainland, aiming to take Google's place in China

(Ming Pao) Wednesday, March 17, 2010, 05:10
[Ming Pao Special Report] With hopes fading that Google will reach an agreement with the Chinese government over web censorship and related issues, discussion has turned to who might take Google's place in China. If Google decides to shut down its Google.cn site and withdraw from China, that could create an opening for Microsoft's Bing (必應) search engine.
The Wall Street Journal, citing people familiar with the matter, reported that Microsoft has poached at least three people from Google's China operation.
Three staff poached from Google China
After Google announced it was considering withdrawing from China, Microsoft chief executive Steve Ballmer made clear that Microsoft intends to stay and to keep complying with local law, which means filtering certain political and religious content. Microsoft's flagship Windows and Office products have long been plagued by piracy in China, and the company needs Beijing's cooperation in cracking down on it. Human rights groups, however, say that if Microsoft hurries to fill the vacancy left by Google, it could face criticism in the United States.
Research firm comScore says that since Microsoft launched Bing last May, its share of the US web search market has risen from 8% to 11.5% this February, while Google's share slipped from 65.5% to 65% over the same period. In China, however, Microsoft has made little headway: it launched its search site cn.bing.com there last June, but holds less than 1% of the market.

Tuesday, March 16, 2010

Was Intel's x86 the "gateway drug" for Apple's ARM?

Posted by Jason Perlow @ 3:00 am March 15th, 2010
http://blogs.zdnet.com/perlow/?p=12323&tag=nl.e539

Apple’s move to the x86 Intel architecture for the Macintosh in 2005 may have been only a temporary stop on the way to its logical end-state: the acquisition of PA Semi and the creation of ARM-based personal computers.

I have been told that I am someone who speculates a great deal. However, like anyone who tries to make predictions about the industry, such speculation is based upon observing historical behavior and analyzing current trends in order to try to develop a vision for a future state. Other friends of mine like to call this “pulling stuff out of my ass”. I’ll meet them halfway.

If you closely examine the history of Apple, you will see that time and time again, the company makes strategic choices which allow it to increasingly take control of its customers, its ecosystem and its intellectual property. Indeed, Apple has always isolated itself from the rest of the industry, but as it has matured, it has become even more of a locked-down ecosystem.

The History

The Macintosh, Apple’s flagship computer product, has undergone quite a few changes since its launch in 1984. Originally it was based on Motorola’s 68000 architecture and used custom firmware along with its proprietary operating system. Ten years later, in order to keep pace with technology and performance, the Macintosh hardware architecture was changed to PowerPC and CHRP, along with other relevant OS changes.
In 1997, Apple acquired NeXT, the company Steve Jobs founded after his ouster from Apple in 1985, and NeXT’s remaining intellectual property — the OpenStep operating system and APIs — became the foundation of Mac OS X.
In 2005, when Apple could no longer extract any more performance out of the desktop-class PowerPC chips and started to fall considerably behind the PC in technology, it went to the only other architecture it could viably pursue — the Intel x86. Which brings us to where we are today: 2010.
In 2010, the Mac faces a number of problems that can only be resolved by yet another paradigm shift. One of these problems is that although the x86 Mac uses a different type of firmware than the standard Intel PC architecture — the Extensible Firmware Interface (EFI) — Mac hackers have succeeded in tricking the operating system into running on much less expensive clone hardware, using software-based EFI emulation on top of the PC BIOS and modified Darwin bootloaders.
One of these hackers, Rudy Pedraza, started up a mail order business in South Florida and sold what amounted to glorified PCs running Apple’s Mac OS X. That company, Psystar, was litigated into oblivion.
While Apple, through the force of its financial might, was able to litigate a tiny American company into ceasing its cloning operations, the company still faces the real possibility that other nations with less favorable legal systems may be able to sustain businesses based on cloned Macs. And while Psystar is dead, the technology that it used to build its systems continues to be heavily developed by the clandestine Hackintosh community.
Additionally, and probably most importantly, further advances in x86 virtualization technology, which permits abstraction of the OS from the hardware, could potentially allow a consumer in the near future to install the Mac OS on their own PCs without a whole lot of fuss. Apple has been resisting implementing virtualization on Mac OS X, and for good reason — they don’t want to enable the people that could possibly damage their cash cow.
Based on Apple’s patterns of 10-year technology refresh cycles and the company’s increased isolationist behavior, all of this points to one thing — another paradigm shift for the company is due. If 2005 and moving to x86 was the last paradigm shift, then the next one is due in 2014 or 2015. However, just like any Silicon Valley earthquake, you always get a few tremors and smaller quakes before the Big One hits.

The Future

While the iPod was the first little “tremor” that signaled a trend towards becoming more of a consumer electronics company than a computer company, it was the introduction of the iPhone in 2007 that was the first “quake” indicating another massive change was in store for the company.
With the iPhone, Apple ported much of its core BSD-based operating system, Darwin, to the ARM architecture, along with its Objective C development platform from Mac OS X. While it must have seemed logical to many to re-use existing assets in order to facilitate the development of the iPhone on the ARM architecture, what Apple really did was stage their transition/migration plan according to what they would actually be doing with their next generation of desktop and portable computers — Multi-core ARM-based Macs.

Also Read: Do we need to wipe the slate with x86?

Apple’s $278 million purchase of Palo Alto Semiconductor (PA Semi) in 2008 gave the company the final piece of the puzzle they needed to become fully independent of Intel and any other microprocessor vendor, and would allow them to return to the completely closed system which they enjoyed in the 1980s and 1990s.
The first fruit of Apple’s labor with PA Semi will be the generation 1 iPad, which uses specially designed custom ARM Cortex A8-based silicon, the A4 processor.
While the 1 GHz A4 isn’t powerful enough to run a Mac today, I believe that the next logical step is for Apple to continue to evolve the silicon into more and more cores at higher clock speeds. With iPad 2, we might very well see two cores and certainly a higher clock speed.
The next step would be to move to 4 cores and larger amounts of cache, which may provide enough computing power to form the basis of the next generation of MacBooks or iMacs. It is not implausible that within five years, six-, eight- or even sixteen-core Apple ARM chips could be released. Large numbers of lower-power cores are not out of the question, as this is where Intel and AMD are both going, and where Sun was going until it went down the path of acquisition.
Given the fact that there are now more applications for the iPhone/iPad ecosystem than there are for the Mac, and that the App Store software distribution is completely controlled by Apple, it makes perfect sense that Apple would move the Mac to a 100 percent proprietary platform, now that it is seeded by many developers and many applications.
It is also notable that the ARM architecture itself, given the number of chips shipped in cell phones and other devices, rivals the x86 desktop ecosystem, and depending on whose figures you look at may even exceed it in the near future. Intel itself is already examining this market very closely, particularly with its most recent acquisition of Wind River, which it purchased in June of 2009.
Wind River creates software development and hypervisor stacks for embedded systems architectures, of which TI’s OMAP and the Qualcomm Snapdragon, both ARM chips, are among the most popular used in smartphones today. Intel also continues to manufacture the ARMADA (formerly Intel XScale) embedded processor for Marvell. Given this heavy trend towards embedded, I believe that Intel may follow Apple’s lead and decide to purchase an ARM/embedded asset, such as Marvell, Freescale Semiconductor or possibly even Texas Instruments.
It is not that much of a stretch to imagine a beefed-up iPad with a larger screen, keyboard and mouse, with multiple processor cores and back-end connectivity to Apple’s massive datacenters running Cloud services. You can call this the Macintosh TNG, or the “Cloudintosh”, but I already gave this computer a name.

“The Screen”

While I believe there will be Google/Linux Screens and even Microsoft “Screens” (as evidenced by developments in Android, Chrome OS, Ubuntu, MeeGo and Windows Phone 7 Series), it actually makes sense for Apple to be the first company to pioneer “Screen” technology.
Effectively, the iPad is the first Screen or the Proto-Screen. The next logical step is to scale up the size of the display to full 1080p with a faster multi-core CPU, more powerful graphics processing with multi-tasking and windowing, with tons of Cloud horsepower to back it up — a synthesis between iPhone OS and Mac OS where the entire means of production, the systems architecture and the software/content delivery mechanism to the device is entirely Apple-controlled.

Will the next generation Mac actually be an evolution of an ARM-based iPad?
  • Yes, all evidence points to a major paradigm shift. (56%)
  • No, Jason is pulling this theory out of his ass. (44%)
Total Votes: 1,352

Indeed, it is entirely possible that everything I have said is pure conjecture, and I could be inferring far too much from Apple’s activities in the past three to five years. When I revisit this subject in 2015, I’m curious as to how close or how far off my predictions were. Talk Back and Let Me Know what you think.

Monday, March 15, 2010

13 instant noodle products exceed the salt limit

(Ming Pao) Monday, March 15, 2010, 12:15
The Consumer Council and the Centre for Food Safety tested 48 instant noodle products and found that 13 contained more salt than the daily intake limit for the human body.
Based on each product's nutrition label, the levels of three components (sodium, total fat and saturated fat) were compared with the intake limits recommended by the WHO. The 48 products covered packet noodles, cup noodles and mix-and-serve noodles.
The results showed that the sodium (i.e. salt) content of 13 products exceeded the daily intake limit of 2,000 mg per person. The 13 comprised 6 packet noodles and 7 cup noodles; the highest contained 5,800 mg of sodium per packet. Excessive sodium intake raises the risk of high blood pressure.
In addition, the total fat content of 3 products exceeded half the daily intake limit. Excessive total fat raises the risk of heart disease and diabetes.
The saturated fat content of 9 products also exceeded half the daily intake limit. Excessive saturated fat raises the body's bad cholesterol.
The Council noted in particular that some instant noodles on the market labelled "non-fried", though indeed lower in fat, exceed the sodium limit, and the public should take note.
The Council said the sodium in instant noodles comes mainly from the seasoning sachet and soy sauce, and advised consumers to add less seasoning and drink less of the MSG-laden soup to cut their sodium intake. (Instant News)

Complaints about mobile broadband rise sharply

(Ming Pao) Monday, March 15, 2010, 12:15
The Consumer Council says that while it received just over two hundred complaints about mobile broadband in the whole of last year, it has received more than three hundred in the first two months of this year.
In those two months the Council logged 347 complaints about computer mobile broadband, 114 of them involving fair-usage clauses. In the whole of last year it received 267 such complaints, only 2 of which involved fair-usage clauses.
The Council points out that many mobile data plans on the market claim unlimited usage, yet under their fair-usage clauses heavy users may be throttled, charged extra or have their service suspended.
It therefore advises consumers, before choosing a plan, to ask clearly whether fair-usage terms apply; heavy users in particular must get this clarified before signing.
The Council stresses that some providers' contracts say only in a single short sentence that service is subject to fair-usage terms, and it urges providers to explain the terms clearly to consumers.
(Instant News)

Friday, March 12, 2010

The ultra-light laptop conundrum: Weak processors

http://blogs.zdnet.com/BTL/?p=31788&tag=nl.e539

March 11th, 2010 Posted by Larry Dignan @ 3:01 am

After a few weeks of laptop window shopping I’m discovering a major hang-up—the Intel ultra-low voltage chips that scrimp on horsepower.
This hang-up of mine began to emerge as I was scoping out a Dell Vostro v13. The laptop is light, stylish and would be a nice non-work device to carry around. Enter the hang-up: an Intel ULV processor. Now I know PC buyers are supposed to have evolved past the GHz line, but I’m a bit old school. Simply put, 1.3 GHz feels too much like a four-cylinder engine to me.

The lighter laptops at Lenovo had more of the same. Even my test drive of a Dell Latitude Z had an Intel Core 2 Duo chip running at 1.4 GHz. The casing said Ferrari, but the chip said Ford Fiesta.
How widespread is this reticence over ULV chips? I have no idea, but I do know the latest wave of laptops from Dell and HP targeted at SMBs focused on Intel’s more powerful processors, notably the i3, i5 and i7. The processor horsepower matters to me and I’m willing to sacrifice some battery time for it. I’m not willing to sacrifice more than a pound though.
Dell’s Vostro 3300 line along with HP’s ProBook S-Series went with the more powerful Intel chips.

The subtle pitch: Folks want more processing power in a lightweight package. Meanwhile, it’s increasingly clear that I can’t quite get to my laptop nirvana. All I want is everything. Specifically, that laptop will include:
  • Something around 3 pounds.
  • An i3, i5, or i7 Intel processor.
  • Lots of battery life.
  • A neat color or aluminum casing.
  • 4GB to 6GB of memory.
  • A price of $1,000 or so.
As far as the drive goes, solid state is nice but I can go traditional to save money. Frankly, the drive size matters; the type is an afterthought.
Thus far, compromise is the word of the day. The big question I’m struggling with: Should I compromise on the processor?

Google opens online store for cloud apps

By Sharon Gaudin | Mar 11, 2010
In another move to work its way into the enterprise, Google has unveiled an online store where users can buy cloud-based applications designed to work with Google's own apps.
 
The Google Apps Marketplace goes live tonight with 50 applications available from third-party vendors, said Chris Vander Mey, a senior product manager for Google.
 
He also noted that the company is celebrating a recently hit milestone -- 25 million users and 2 million businesses that now are using Google Apps, like its popular Gmail e-mail service and its Google Calendar application.
 
"What we found as we talked to these customers is that they asked for more apps," said Vander Mey. "They want a specific app for a specific vertical... We want to help them but one of the challenges has been that as you get more apps, there's more hassle. These apps don't naturally work together. They have to share data and they don't do it natively."
 
However, he added that the cloud-based applications being sold in the new marketplace are specifically designed to work with Google's own applications, which should take a lot of the hassle out of the integration.
 
That is a good move for a company that has been trying to shift from a consumer-oriented course to a more enterprise-focused one. For months now, Google has been trying to push its applications into the enterprise.
 
Dan Olds, an analyst with The Gabriel Consulting Group, said this new move is a smart one. "This move not only raises the profile of Google apps for business and individual users alike, it also plays on the so-called app mania that has propelled platforms like the iPhone and other devices into prominence," Olds said.
 
"With their own app store, Google provides a store front where developers can display and sell their wares to a large audience. With a lot of developer interest, there's a chance that someone will put together a must-have app that is useful, or fun, enough to capture the imagination of users, which will convert more of them to Google's platform," he said.
 
Vander Mey noted that partners are going to be key to growing Google's business in the enterprise. "These third-party vendors expand our breadth of being able to help enterprises run their businesses in the cloud," he said. "If you need payroll or accounting or image management, you can go to our partners. We will help each other grow a rich ecosystem."
 
Third-party developers that are selling their applications on Google Apps Marketplace include Intuit's online payroll application, Manymoon's project management application, and Mailchimp's e-mail newsletter management application.
 
David Glazer, an engineering director with Google, noted that the company will get 20 percent of the revenue from all sales on the marketplace site.

Thursday, March 11, 2010

The Rise of Mobile Payments…and the End of Wallets?



November 23, 2009, 11:13
It’s not often I trek across town for the unveiling of a policy report. Last week, I did, heading over to the Information Technology and Innovation Foundation (ITIF) for an event that was worth the walk. On Tuesday, the ITIF released a new report, “Explaining International Mobile Payments Leadership,” that examines why the U.S. lags behind other nations in establishing a mobile payments system and offers recommendations for how the federal government can speed the arrival and adoption of mobile commerce.
The report, authored by ITIF Senior Analyst Stephen Ezell, explores the global state of mobile payment systems and identifies Japan, South Korea, and Singapore as the world’s leaders. In these countries, mobile phones are used in conjunction with near field communications (NFC) technology to pay for public transit, to check in at airline gates, to make purchases from retailers, and, in some cases, to supplement banking and financial institutions. As a result, the mobile phone has evolved into an “electronic wallet,” which the report defines as “a multi-functional device possessing cash, information storage and transaction, identification and authentication, and communication functions.”
After presenting the report, Ezell participated in a panel discussion alongside David Jeppsen, Vice President, NTT DOCOMO USA, and Mark MacCarthy, Adjunct Professor, Georgetown University and Former Senior Vice President for Global Public Policy at Visa Inc. The discussion was moderated by ITIF President Robert Atkinson.
The panelists argued that using cell phones as electronic wallets will result in increased economic productivity and personal convenience. However, as the report notes, “widespread deployment and adoption of mobile payments systems requires action from a complex ecosystem of organizations.” This ecosystem includes mobile carriers, banks, credit card companies, and others. Because of the complexities involved, only a few nations have succeeded in coordinating the ecosystem required to develop a widely used mobile payments system. For America to realize the convenience and cost-savings opportunities provided by mobile payments, Ezell stated, it “needs to develop and adopt a national strategy with government participation.”
The report’s key recommendations suggest that government should:
  1. Create an inter-government mobile payments working group and private-sector advisory council that would collaborate to introduce, by mid-2010, a strategy for spurring the deployment of an open, interoperable mobile wallet;
  2. Assume a leadership role in promoting and adopting mobile payments (e.g., require that mass transit systems receiving federal funding deploy mobile payment systems, and provide funding for pilot programs); and
  3. Establish clear consumer protections and address consumer privacy concerns.
Though the report states that electronic wallets are “now ready for full-scale implementation and use,” Mark MacCarthy noted that first, “we need incentives for merchants to upgrade and for carriers to embrace their role as payment intermediaries.”
No clear timetable was offered on when Americans can expect a mobile payment system. Stephen Ezell predicted, “two or three years,” while David Jeppsen said, “this technology is being developed for my twelve-year-old…who will be using it when he gets to college.”

Tuesday, March 9, 2010

Windows 7 memory usage: What's the best way to measure?

http://blogs.zdnet.com/Bott/?p=1786&tag=nl.e539
Posted by Ed Bott @ 6:59 pm February 25th, 2010
Windows memory management is rocket science. And don’t believe anyone who tells you otherwise.
Since Windows 7 was released last October I’ve read lots of articles about the right and wrong way to measure and manage the physical memory on your system. Much of it is well-meaning but just wrong.
It doesn’t help that the topic is filled with jargon and technical terminology that you literally need a CS degree to understand. Even worse, web searches turn up mountains of misinformation, some of it on Microsoft’s own web sites. And then there’s the fact that Windows memory management has evolved, radically, over the past decade. Someone who became an expert on measuring memory usage using Windows 2000 might have been able to muddle through with Windows XP, but he would be completely flummoxed by the changes that began in Windows Vista (and its counterpart, Windows Server 2008) and have continued in Windows 7 (and its counterpart, Windows Server 2008 R2).
To help cut through the confusion, I’ve taken a careful look at memory usage on a handful of Windows 7 systems here, with installed RAM ranging from 1 GB to 10 GB. The behavior in all cases is strikingly similar and consistent, although you can get a misleading picture depending on which of three built-in performance monitoring tools you use. What helped me understand exactly what was going on with Windows 7 and RAM was to arrange all three of these tools side by side and then begin watching how each one responded as I increased and decreased the workload on the system.
To see all three memory-monitoring tools at work,
be sure to step through the screen shot gallery I created here:
How to measure Windows 7 memory usage.

Here are the three tools I used:

Task Manager You can open Task Manager by pressing Ctrl+Shift+Esc (or press Ctrl+Alt+Delete, then click Start Task Manager). For someone who learned how to read memory usage in Windows XP, the Performance tab will be familiar, but the data is presented very differently. The most important values to look at are under the Physical Memory heading, where Total tells you how much physical memory is installed (minus any memory in use by the BIOS or devices) and Available tells you how much memory you can immediately use for a new process.

Performance Monitor This is the old-school Windows geek’s favorite tool. (One big advantage it has over the others is that you can save your counter data for later review.) To run it, click Start, type perfmon, and press Enter. To use it, you must create a custom layout by adding “counters” that track resource usage over time. The number of available counters, broken into more than 100 separate categories, is enormous; in Windows 7 you can choose from more than 35 counters under the Memory heading alone, measuring things like Transition Pages RePurposed/sec. For this exercise, I configured Perfmon to show Committed Bytes and Available Bytes. The latter is the same as the Available figure in Task Manager. I’ll discuss Committed Bytes in more detail later.

Resource Monitor The easy way to open this tool is by clicking the button at the bottom of the Performance tab in Task Manager. Resource Monitor was introduced in Windows Vista, but it has been completely overhauled for Windows 7 and displays an impressive amount of data, drawn from the exact same counters as Perfmon without requiring you to customize anything. The Memory tab shows how your memory is being used, with detailed information for each process and a colorful Physical Memory bar graph to show exactly what’s happening with your memory. I believe this is by far the best tool for understanding at a glance where your memory is being used.

You can go through the entire gallery to see exactly how each tool works. I ran these tests on a local virtual machine, using 1 GB of RAM as a worst-case scenario. If you have more RAM than that, the basic principles will be the same, but you’ll probably see more Available memory under normal usage scenarios. As you’ll see in the gallery, I went from an idle system to one running a dozen or so processes, then added in some intensive file operations, a software installation, and some brand-new processes before shutting everything down and going back to an idle system.
Even on a system with only 1 GB of RAM, I found it difficult to exhaust all physical memory. At one point I had 13 browser tabs open, including one playing a long Flash video clip; at the same time I had opened a 1000-page PDF file in Acrobat Reader and a 30-page graphically intense document in Word 2010, plus Outlook 2010 downloading mail from my Exchange account, a few open Explorer windows, and a handful of background utilities running. And, of course, three memory monitoring tools. Even with that workload, I still had roughly 10% of physical RAM available.
So why do people get confused over memory usage? One of the biggest sources of confusion, in my experience, is the whole concept of virtual memory compared to physical memory. Windows organizes memory, physical and virtual, into pages. Each page is a fixed size (typically 4 KB on a Windows system). To make things more confusing, there’s also a page file (sometimes referred to as a paging file). Many Windows users still think of this as a swap file, a bit of disk storage that is only called into play when you absolutely run out of physical RAM. In modern versions of Windows, that is no longer the case. The most important thing to realize is that physical memory and the page file added together equal the commit limit, which is the total amount of virtual memory that all processes can reserve and commit. You can learn more about virtual memory and page files by reading Mark Russinovich’s excellent article Pushing the Limits of Windows: Virtual Memory.
As I was researching this post, I found a number of articles at Microsoft.com written around the time Windows 2000 and Windows XP were released. Many of them talk about using the Committed Bytes counter in Perfmon to keep an eye on memory usage. (In Windows 7, you can still do that, as I’ve done in the gallery here.) The trouble is, Committed Bytes has only the most casual relationship to actual usage of the physical memory in your PC. As Microsoft developer Brandon Paddock noted in his blog recently, the Committed Bytes counter represents:
The total amount of virtual memory which Windows has promised could be backed by either physical memory or the page file.
An important word there is “could.” Windows establishes a “commit limit” based on your available physical memory and page file size(s). When a section of virtual memory is marked as “commit”, Windows counts it against that commit limit regardless of whether it’s actually being used.
On a typical Windows 7 system, the amount of memory represented by the Committed Bytes counter is often well in excess of the actual installed RAM, but that shouldn’t have an effect on performance. In the scenarios I demonstrate here, with roughly 1 GB of physical RAM available, the Committed Bytes counter never dropped below about 650 MB, even though physical RAM in use was as low as 283 MB at one point. And ironically, on the one occasion when Windows legitimately used almost all available physical RAM, using a little more than 950 MB of the 1023 MB available, the Committed Bytes counter remained at only 832 MB.

So why is watching Committed Bytes important? You want to make sure that the amount of committed bytes never exceeds the commit limit. If that happens regularly, you need either a bigger page file, more physical memory, or both.
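
For readers who want to watch the commit charge against the commit limit programmatically rather than in Perfmon, here is a minimal Python sketch using the standard library’s ctypes to call the Win32 GlobalMemoryStatusEx API. The structure layout follows the documented MEMORYSTATUSEX; treating ullTotalPageFile minus ullAvailPageFile as the current commit charge is an approximation, and the 90% warning threshold is an arbitrary choice for illustration:

```python
import ctypes

# Field layout of the Win32 MEMORYSTATUSEX structure.
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_uint32),
        ("dwMemoryLoad", ctypes.c_uint32),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),  # commit limit: RAM plus page file(s)
        ("ullAvailPageFile", ctypes.c_uint64),  # commit headroom still available
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

def memory_snapshot():
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    mb = 1024 * 1024
    limit = stat.ullTotalPageFile // mb
    committed = (stat.ullTotalPageFile - stat.ullAvailPageFile) // mb
    print(f"Physical RAM: {stat.ullAvailPhys // mb} MB available "
          f"of {stat.ullTotalPhys // mb} MB")
    print(f"Commit charge: {committed} MB of {limit} MB limit")
    if committed > 0.9 * limit:
        print("Warning: near the commit limit; consider a bigger page file or more RAM.")

if __name__ == "__main__":
    memory_snapshot()
```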
Watching the color-coded Physical Memory bar graph on the Memory tab of Resource Monitor is by far the best way to see exactly what Windows 7 is up to at any given time. Here, from left to right, is what you’ll see:

Hardware Reserved (gray) This is physical memory that is set aside by the BIOS and other hardware drivers (especially graphics adapters). This memory cannot be used for processes or system functions.
In Use (green) The memory shown here is in active use by the Windows kernel, by running processes, or by device drivers. This is the number that matters above all others. If you consistently find this green bar filling the entire length of the graph, you’re trying to push your physical RAM beyond its capacity.
Modified (orange) This represents pages of memory that can be used by other programs but would have to be written to the page file before they can be reused.
Standby (blue) Windows 7 tries as hard as it can to keep this cache of memory as full as possible. In XP and earlier, the Standby list was basically a dumb first-in, first-out cache. Beginning with Windows Vista and continuing with Windows 7, the memory manager is much smarter about the Standby list, prioritizing every page on a scale of 0 to 7 and reusing low-priority pages ahead of high-priority ones. (Another Russinovich article, Inside the Windows Vista Kernel: Part 2, explains this well. Look for the “Memory Priorities” section.) If you start a new process that needs memory, the lowest-priority pages on this list are discarded and made available to the new process.
Free (light blue) As you’ll see if you step through the entire gallery, Windows tries its very best to avoid leaving any memory at all free. If you find yourself with a big enough chunk of memory here, you can bet that Windows will do its best to fill it by copying data from the disk and adding the new pages to the Standby list, based primarily on its SuperFetch measurements. As Russinovich notes, this is done at a rate of a few pages per second with Very Low priority I/Os, so it shouldn’t interfere with performance.
In short, Windows 7 (unlike XP and earlier Windows versions) goes by the philosophy that empty RAM is wasted RAM and tries to keep it as full as possible, without impacting performance.
Questions? Comments? Leave them in the Talkback section and I’ll answer them in a follow-up post or two.

iPad: Perfectly flawed

Posted by Adrian Kingsley-Hughes @ 5:09 am March 8th, 2010

http://blogs.zdnet.com/hardware/?p=7590&tag=nl.e539
This Friday Apple begins taking pre-orders for the new iPad, which will be available April 3rd. While I really like the device, I’m very aware of the fact that the device is flawed … perfectly flawed.
Nowadays $500 buys you a lot of hardware, and since I’m not obsessed by having a particular logo on my hardware, I try to make rational decisions when it comes to spending my cash.
I like the iPad, a lot. I like the screen, I like the form-factor, heck, I even like the broad base of apps already available for it from the App Store. But there are aspects of the device that I don’t like, and which I find really hard to overlook.
  • DRM, DRM, DRM
    Just because Apple’s given up on DRM for music, don’t think for one moment that it’s given up on DRM. Expect audio books, movies and other stuff to be locked away nice and tight.
  • The lock-in
    Basically, the device is one big lock into the Apple ecosystem. Sure, there’ll be jailbreaks I’m sure, but that puts my device in the middle of a tug-of-war between Apple and the jailbreakers.
  • No Flash support
    I hate Flash, but the web minus Flash is a pretty poor web experience.
  • No removable storage
    It would be really cool to be able to store files on removable media, such as an SD card, and swap that data between other iPads and other devices. It would be a good way to bring photos from digital cameras onto the iPad without having to have a PC or Mac as a go-between. Alas, this is not possible.
  • No USB support
    I know that Apple likes to have an iron grip over its hardware, and that it likes the revenue stream it gets from licensing the dock connector to third-parties, but I’d really like a USB port on the iPad because it would offer interoperability between my existing hardware and the iPad.
  • Built-in battery
    Yes, I still hate the built-in battery.
I think I’ll be holding onto my money for a little while … maybe another vendor will come out with a tablet that offers most of the upsides but without so many downsides.

Sunday, March 7, 2010

Using Facebook data, its founder broke into journalists' email

(Ming Pao) Sunday, March 7, 2010, 05:10
http://hk.news.yahoo.com/article/100306/4/gvum.html
[Ming Pao Special Report] Mark Zuckerberg, founder of Facebook, the world's largest social networking site, is alleged to have turned hacker twice in 2004, in one case using Facebook itself as a tool to break into the email accounts of competitors and journalists, raising questions over whether the young billionaire would stop at nothing to serve his own interests. Facebook neither confirmed nor denied the reports, saying only that they were "intended to embarrass Zuckerberg".
BusinessInsider.com published online the results of a two-year investigation into Facebook's founding. It reported that in 2003 three Harvard students, Cameron Winklevoss, Tyler Winklevoss and Divya Narendra, asked Zuckerberg to help finish the code for their social network HarvardConnection.com (renamed ConnectU in 2004), but Zuckerberg stalled for months while privately building the Harvard student network thefacebook.com, formally launching Facebook in February 2004 and seizing the first-mover advantage among social networking sites.
Fearing damaging coverage, he read journalists' email
The three complained to the Harvard student paper The Crimson that Zuckerberg had stolen their idea. After receiving The Crimson's request for an interview, Zuckerberg, afraid the paper would damage his reputation, took the student journalists' Facebook.com login details and tried to get into their Harvard email accounts. He "guessed right" twice, read some of the journalists' Harvard email, and learned the direction the paper's story would take. In 2008 the three sued Zuckerberg for fraud, infringement, theft of trade secrets and stealing their idea for a social networking site; the case was settled out of court, with Zuckerberg agreeing to pay them US$65 million (about HK$500 million).
Attacked rival's site and tampered with user data
The other incident also took place in 2004. Facebook was by then finding its feet, but Zuckerberg, wary of the competition, used a hacking program to get into ConnectU's system, where he altered some ConnectU users' details and entered false information for users, including for ConnectU founder Winklevoss, even creating a fake account in his name. He was also alleged to have set some ConnectU users' status to "invisible", making them hard to find on the network, and to have shut down at least 20 ConnectU accounts altogether. The report did not say how Zuckerberg got into these accounts.
Facebook declined to comment on the report. A spokesman said: "We won't comment on distasteful litigation and anonymous reports that aim to rewrite Facebook's early history or embarrass Mark. The indisputable fact is that since leaving Harvard for Silicon Valley about six years ago, Mark has led Facebook's growth from a college website into a global service that is an important part of the lives of more than 400 million people."
Daily Mail

Saturday, March 6, 2010

Online thieves reach into securities accounts; investors' money vanishes

13:34, February 22, 2010

http://stock.jrj.com.cn/2010/02/2213346989479.shtml
If the cabbages in your online farm game get stolen, it may dent your mood, and you can always plant more. But if someone sets their sights on your securities account, you have real trouble.

That is what happened to Mr. Yang, an investor in Shanghai. When he checked his cash account on February 18, 2009, he found that several hundred thousand yuan of assets had vanished, leaving only a few hundred yuan!

"They plant trojan viruses on investors' computers, seize the chance to steal account passwords, then take control of the accounts and repeatedly cross-trade certain securities between those accounts and accounts under their own control, pocketing the price difference and shifting the investors' assets into their own accounts the way ants move house," said Li Enzhong, political commissar of the cyber-investigation division of the Jingzhou public security bureau, which has just cracked the securities account theft case of the "Xiong Dongdong" gang.

Figures provided by the Jingzhou police show that, as verified so far, the accounts controlled by the "Xiong Dongdong" gang were spread across several parts of Hunan and Hubei provinces, traded securities worth more than 23 million yuan, and netted over 1.05 million yuan.

In the latest development, on December 7, 2009 the Jingzhou procuratorate filed the prosecution with the Jingzhou Intermediate People's Court, recommending a life sentence for the principal offender.

Weaving a net to catch the thief

At 10:52 a.m. on April 27, 2009, an alarm rang out from the Shanghai Stock Exchange's real-time surveillance system: a security's price had jumped 9.6% within one minute, a clear trading anomaly.

"Surveillance staff immediately checked the trading records and found that a 'Fang X' account trading at a Huatai Securities branch in Jingzhou was buying low and selling high at a profit, while its counterparty, a 'Meng X' account trading at Guotai Junan Securities' Huayuan Road branch in Zhengzhou, was buying high and selling low at a loss. Our initial judgment was that the trades appeared to be controlled by one person, and that the culprit was probably the suspected account thief we had been tracking for some time," a person in the SSE's market surveillance department told this newspaper.

Under the securities enforcement cooperation mechanism, the SSE acted at once.

On one hand, it reported the situation to the China Securities Regulatory Commission and, with the CSRC coordinating, passed the fresh leads to the Hubei securities regulatory bureau and the local police where the crime was unfolding, recommending immediate action. On the other hand, it notified the brokerage branch holding the victim's account, asking it to contact the client immediately to confirm the account had been hijacked, take effective measures to stop the abuse from spreading, and report promptly to the police. The SSE also told the designated branches where the suspect accounts traded to follow the accounts closely and to prepare account-opening records, trading IP logs, cash-account records and other materials.

"When the calls came in from the SSE and the enforcement bureau, the bureau's leadership took them very seriously, reacted swiftly and immediately made comprehensive arrangements," a person at the Hubei securities regulatory bureau told this newspaper.

First, the bureau immediately passed the new information to the Hubei provincial public security department and asked for a swift investigation to crack the case as soon as possible; second, it quickly questioned the responsible officers of the Huatai Securities branch in Jingzhou and expressly required real-time monitoring of the suspect account; third, it coordinated with the branch to report promptly to the Jingzhou police and cooperate fully with their operation.

The cyber-surveillance division of the Jingzhou public security bureau, which handled the case, established through painstaking inquiries that the 'Fang X' account was logging on from the 'Tianjia' Internet cafe in Jingzhou district, Jingzhou.

By 11:35 a.m., a large net had been woven.

"By the time we reached the cafe, the suspect had slipped away!" said Li Enzhong, the division's political commissar and the operation's chief planner. "We kept monitoring the 'Fang X' account and found it came online again at 2:45 p.m. for ten minutes at the 'Jiuzhou Netcity' cafe in Shashi district. Unfortunately, by the time we got there, it had logged off again!"

With that day's operation fruitless, the division called an emergency meeting that night.

"At the meeting I argued that we could not just watch the IPs from the securities trades: the IPs supplied by the securities firms arrive with a delay, so by the time we fixed a location and reached the scene it was already too late. What mattered was watching his movement of funds," Li said. "We then contacted the local Industrial and Commercial Bank of China (601398) branch holding the 'Fang X' account's funds and put the cash account under surveillance."

That night the police split into four teams, watching respectively for movements on the ICBC account, the branch where the offending account traded, and the relevant Internet cafes in Jingzhou and Shashi districts.

At 8:48 a.m. on April 28 the bank surveillance team spotted the target: the suspect was withdrawing cash at an ATM.

The outcome held no suspense: Xiong Dongdong was seized on the spot as he took the money.

On June 2, two further suspects in the case, Li Wei and Xu Min, were arrested.

23 million yuan traded for a 1.05 million gain

According to Xiong Dongdong's confession, a group of them ran the scheme, and he was responsible only for withdrawing cash. The other four members were Li Wei, Xu Min, Guan Jie and Chen Jianhua (the last two still at large), with Li Wei the ringleader.

"They had a fairly clear division of labour: some planted trojan viruses online, some operated the securities trades, and some specialised in withdrawing the cash," Li Enzhong said. "They kept in touch only by meeting face to face, by very primitive means: they never telephoned, not even from public phones."

Li Wei called this the 'bin Laden model', a reference to bin Laden's giving up all communications equipment after the US military tracked and bombed his satellite-phone signal, thereby keeping himself, in the information age, in the most primitive kind of safety.

Following this logic, although the group committed its thefts over the network, they never used QQ or other instant-messaging tools, played no online games, and kept each session online extremely short, making them "very hard to trace".

Such 'high-IQ' counter-surveillance skill is astonishing. Yet materials provided by the police show that the three arrested so far, Li Wei, Xiong Dongdong and Xu Min, are all from Shishou, Hubei province, around twenty years old, educated to junior secondary level, and unemployed.

"Li Wei was the ringleader; the others had no idea how to do it (planting trojans to steal account passwords), it was his idea. He had earlier been sentenced to two years' imprisonment for theft by the Nanshan district people's court in Shenzhen, and he started again as soon as he was released," Li Enzhong said.

Li Wei told the police they chose accounts by several criteria: first, accounts not used often, to avoid discovery in the short term; second, accounts holding relatively large sums, at least several hundred thousand yuan, making it easier to trade securities and generate a profitable price difference.

Trading records show that over the three trading days from February 16 to 18, 2009, the 'Lai X' account cycled through this pattern of trading against the hijacked account in 11 securities, buying 606 lots and selling 606 lots, for a turnover of 680,586.90 yuan and a gain of 50,610 yuan (before transaction costs).

Figures from the Jingzhou police show that within a few months the Xiong Dongdong gang used stolen accounts to trade 21,198 lots, with turnover of 23,730,411 yuan, netting 1,057,613.40 yuan.

On July 31, the Jingzhou public security bureau transferred the case to the Jingzhou people's procuratorate for prosecution review. On December 7, the procuratorate formally filed suit at the Jingzhou Intermediate People's Court.

Classified as theft, with a possible life sentence

"We prosecuted on the charge of theft," Li Tao, head of the prosecution division at the Jingzhou procuratorate, told this newspaper; given the enormous sums involved, the case went directly to the Jingzhou Intermediate People's Court.

In fact, during the investigation the Jingzhou police had proposed two charges, spreading computer viruses and theft, but only the theft charge was retained when the case was transferred for prosecution review.

"The suspects' purpose was to steal other people's passwords and ultimately their money, not to spread computer viruses. Planting the virus was the means to that end, and the conduct also satisfies the elements of secretly taking another person's property," Li Tao explained.

As for the sentence, given the especially large amount stolen, the procuratorate recommended life imprisonment, the treatment the relevant provisions prescribe for the gravest circumstances.

Of the gang's gains of just over 1.05 million yuan, more than 400,000 yuan was withdrawn in cash and entirely squandered; the remaining funds were frozen. The theft counts as completed.

As for whether the amount stolen meets the standard for 'especially huge', the threshold varies by region because provincial economies develop unevenly. Under the Supreme People's Court's judicial interpretation, the basic range set for central China is 150,000 to 300,000 yuan; Hubei took the middle value, drawing the line at 180,000 yuan. This case plainly involves an especially huge amount.

"Take the first defendant, Li Wei: he proposed the crime and bears principal responsibility in the case, and he had previously been convicted in Shenzhen of the same offence of theft. Reoffending within five years makes him a recidivist, to be punished more severely," Li Tao said. "Of course, this case is a little unusual: the maximum penalty for ordinary theft is life imprisonment, so 'more severe' can go no further than life."

For Xiong Dongdong and Xu Min, the procuratorate recommended fixed-term sentences of ten years or more, up to life imprisonment.

Notably, during the review stage the procuratorate twice returned the case to the police for supplementary investigation, because the methods used to calculate the stolen amount were inconsistent and produced different totals.

There are currently three ways of calculating the amount stolen.

The first and most direct is to take the victim's original account balance and subtract what remains; the difference counts as the amount stolen, ignoring intermediate price differences and leakage.

"This calculation is simple, but its drawback is that the amount the defendants actually obtained may not match it; it does too little to protect the defendants' lawful rights and is very unfavourable to them," Li Tao said.

The police used this method when they first transferred the case; after thorough discussion, the second method was adopted: the amount stolen is the difference between the money in the suspects' accounts after the thefts and what was there originally.

A third method further deducts taxes and other leakage incurred during trading; it is relatively complex and rarely used.

On December 7, the Jingzhou procuratorate delivered the case to the Jingzhou Intermediate People's Court for prosecution; the court has accepted it and will hold a public hearing in due course.