READING ANSWER KEY – CAMBRIDGE 18 TEST 2 PASSAGE 2


🎓 IELTS Reading Practice Test

Living with artificial intelligence (Cambridge 18 – Test 2 – Passage 2)

Detailed Guide to Answering the Questions

🎯 Q14-19: Multiple Choice

  • Read the question stems first; don’t rush into options A, B, C, D.
  • Identify the keywords and locate the paragraph referred to (e.g. “First paragraph”).
  • Compare the ideas in the passage with the options and choose the closest match.

🔗 Q20-23: YES / NO / NOT GIVEN

  • YES: the statement fully matches the writer’s claims.
  • NO: the statement contradicts the passage.
  • NOT GIVEN: the passage does not give enough information to decide.

📝 Q24-26: Summary Completion

  • Quickly scan options A–F (noting the word class of each).
  • Find the paragraph in the passage about the “UK health system” (NHS).
  • Write the correct letter, A, B, C, D, E or F, in each gap.


Living with artificial intelligence

Powerful artificial intelligence (AI) needs to be reliably aligned with human values, but does this mean AI will eventually have to police those values?

This has been the decade of AI, with one astonishing feat after another. A chess-playing AI that can defeat not only all human chess players, but also all previous human-programmed chess machines, after learning the game in just four hours? That’s yesterday’s news, what’s next? True, these prodigious accomplishments are all in so-called narrow AI, where machines perform highly specialised tasks. But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI) (Q14) – machines that can achieve human-level performance on the full range of tasks that we ourselves can tackle.

If so, there’s little reason to think it will stop there. Machines will be free of many of the physical constraints on human intelligence (Q15). Our brains run at slow biochemical processing speeds on the power of a light bulb, and their size is restricted by the dimensions of the human birth canal. It is remarkable what they accomplish, given these handicaps. But they may be as far from the physical limits of thought as our eyes are from the incredibly powerful Webb Space Telescope.

Once machines are better than us at designing even smarter machines, progress towards these limits could accelerate. What would this mean for us? Could we ensure a safe and worthwhile coexistence with such machines? On the plus side, AI is already useful and profitable for many things, and super AI might be expected to be super useful, and super profitable. But the more powerful AI becomes, the more important it will be to specify its goals with great care. Folklore is full of tales of people who ask for the wrong thing, with disastrous consequences – King Midas (Q16), for example, might have wished that everything he touched turned to gold, but didn’t really intend this to apply to his breakfast.

So we need to create powerful AI machines that are ‘human-friendly’ – that have goals reliably aligned with our own values. One thing that makes this task difficult is that we are far from reliably human-friendly ourselves. We do many terrible things to each other and to many other creatures with whom we share the planet. If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble. (Q17) We’ll have powerful new intelligence amplifying the dark sides of our own fallible natures.

For safety’s sake, then, we want the machines to be ethically as well as cognitively superhuman. We want them to aim for the moral high ground, not for the troughs in which many of us spend some of our time. Luckily they’ll be smart enough for the job. If there are routes to the moral high ground, they’ll be better than us at finding them (Q18), and steering us in the right direction.

However, there are two big problems with this utopian vision. One is how we get the machines started on the journey, the other is what it would mean to reach this destination. The ‘getting started’ problem is that we need to tell the machines what they’re looking for with sufficient clarity that we can be confident they will find it – whatever ‘it’ actually turns out to be. This won’t be easy, given that we are tribal creatures and conflicted about the ideals ourselves. We often ignore the suffering of strangers (Q19), and even contribute to it, at least indirectly. How then, do we point machines in the direction of something better?

As for the ‘destination’ problem, we might, by putting ourselves in the hands of these moral guides and gatekeepers, be sacrificing our own autonomy – an important part of what makes us human. Machines who are better than us at sticking to the moral high ground may be expected to discourage some of the lapses we presently take for granted. We might lose our freedom to discriminate in favour of our own communities (Q20), for example.

Loss of freedom to behave badly isn’t always a bad thing, of course: denying ourselves the freedom to put children to work in factories, or to smoke in restaurants are signs of progress. But are we ready for ethical silicon police (Q21) limiting our options? They might be so good at doing it that we won’t notice them; but few of us are likely to welcome such a future. (Q22)

These issues might seem far-fetched, but they are to some extent already here. AI already has some input into how resources are used (Q24) in our National Health Service (NHS) here in the UK, for example. If it was given a greater role, it might do so much more efficiently than humans can manage, and act in the interests of taxpayers and those who use the health system. However, we’d be depriving some humans (e.g. senior doctors) (Q25) of the control (Q26) they presently enjoy. Since we’d want to ensure that people are treated equally and that policies are fair, the goals of AI would need to be specified correctly.

We have a new powerful technology to deal with – itself, literally, a new way of thinking. For our own safety, we need to point these new thinkers in the right direction, and get them to act well for us. It is not yet clear whether this is possible, but if it is, it will require a cooperative spirit, and a willingness to set aside self-interest. (Q23)

Both general intelligence and moral reasoning are often thought to be uniquely human capacities. But safety seems to require that we think of them as a package: if we are to give general intelligence to machines, we’ll need to give them moral authority, too. And where exactly would that leave human beings? All the more reason to think about the destination now, and to be careful about what we wish for.

Questions 14–19
Choose the correct letter, A, B, C or D.
14. What point does the writer make about AI in the first paragraph?
✅ Answer: C
Paragraph 2: “But many experts believe this restriction is very temporary. By mid-century, we may have artificial general intelligence (AGI)…” Many experts believe this restriction (narrow AI) is only temporary; by mid-century we may have AGI matching human-level performance.
15. What is the writer doing in the second paragraph?
✅ Answer: A
Paragraph 3 compares the human brain (limited in size, running at slow speeds) with machines (“free of many of the physical constraints”). The writer is explaining why machines could easily surpass humans in performance.
16. Why does the writer mention the story of King Midas?
✅ Answer: B
Paragraph 4: the King Midas story illustrates asking “for the wrong thing, with disastrous consequences” – much like failing to take care when setting objectives for AI.
17. What challenge does the writer refer to in the fourth paragraph?
✅ Answer: D
Paragraph 5: “One thing that makes this task difficult is that we are far from reliably human-friendly ourselves… If superintelligent machines don’t do a lot better than us, we’ll be in deep trouble.” Humans themselves are far from reliably friendly, so AI will need to do much better than we do.
18. What does the writer suggest about the future of AI in the fifth paragraph?
✅ Answer: C
Paragraph 6: “they’ll be better than us at finding them [routes to the moral high ground]”. The writer suggests machines will outperform humans at finding the morally right course.
19. Which of the following best summarises the writer’s argument in the sixth paragraph?
✅ Answer: D
Paragraph 7: “The ‘getting started’ problem is… This won’t be easy, given that we are tribal creatures and conflicted… ignore the suffering of strangers”. These human flaws make it hard to point machines in the right direction.
Questions 20–23
Do the following statements agree with the claims of the writer in Reading Passage 2?
Select YES, NO or NOT GIVEN.
20. Machines with the ability to make moral decisions may prevent us from promoting the interests of our communities.
✅ Answer: YES
Paragraph 8: “We might lose our freedom to discriminate in favour of our own communities, for example.” This matches the statement directly.
21. Silicon police would need to exist in large numbers in order to be effective.
✅ Answer: NOT GIVEN
Paragraph 9 mentions “ethical silicon police”, but says nothing about the numbers (large or otherwise) needed for them to be effective.
22. Many people are comfortable with the prospect of their independence being restricted by machines.
✅ Answer: NO
Paragraph 9: “…but few of us are likely to welcome such a future.” This contradicts the statement (“Many people are comfortable”).
23. If we want to ensure that machines act in our best interests, we all need to work together.
✅ Answer: YES
Paragraph 11: “…it will require a cooperative spirit, and a willingness to set aside self-interest.” This matches the idea of working together.
Questions 24–26
Complete the summary using the list of phrases, A–F, below.
Write the correct letter, A–F, in boxes 24–26.
A medical practitioners
B specialised tasks
C available resources
D reduced illness
E professional authority
F technology experts

Using AI in the UK health system

AI currently has a limited role in the way 24 ………… are allocated in the health service. The positive aspect of AI having a bigger role is that it would be more efficient and lead to patient benefits. However, such a change would result, for example, in certain 25 ………… not having their current level of 26 ………… . It is therefore important that AI goals are appropriate so that discriminatory practices could be avoided.
✅ Q24. Answer: C (available resources)
Paragraph 10: “AI already has some input into how resources are used in our National Health Service (NHS)…”
✅ Q25. Answer: A (medical practitioners)
Paragraph 10: “…depriving some humans (e.g. senior doctors)…” → “senior doctors” corresponds to “medical practitioners”.
✅ Q26. Answer: E (professional authority)
Paragraph 10: “…of the control they presently enjoy.” → “control” corresponds to “professional authority”.

💡 IELTS Reading Strategy

📖 Quick approach:

  • Highlight keywords as you scan (e.g. by selecting text with your mouse) to mark draft answers.
  • MCQ (14–19): answers follow the order of the passage. Read one paragraph, answer one question, so you avoid re-reading from the start.
  • Y/N/NG (20–23): the key words lie in the second half of the passage. Be careful to distinguish “NO” (contradiction) from “NOT GIVEN” (not enough information to decide).
  • Summary (24–26): locate the paragraph containing “UK health system” (NHS). The answers are usually concentrated in a single paragraph (Paragraph 10). Analyse the word class (noun, verb…) of each gap to match it with options A–F.
