Figure 1. One-year survival probabilities for first cadaveric and living donor allografts and their recipients, adjusted for age, sex, race, and primary diagnosis. Despite impressive increases in cadaveric allograft survival, living donor allograft survival is consistently superior. Source: US Renal Data System 2002 Annual Data Report.
Figure 2. Annual incidences of early acute rejection, late acute rejection, and delayed graft function. Note that although rejection rates have fallen dramatically, rates of delayed graft function remain unchanged. The latter reflects nonimmunological variables such as ischemia times and use of suboptimal cadaveric donors. Adapted with permission from Gjertson.5
Figure 3. The solid line shows the steady increase in the incident number of patients with end-stage renal disease (ESRD) in the United States in the late 1990s. Although the absolute number of transplantations per year has increased somewhat, the rate of transplantations per 100 dialysis patient-years has fallen (dashed line). The gray line shows the rate of transplantations per 100 dialysis patient-years in black Americans alone. Source: US Renal Data System 2002 Annual Data Report.
Figure 4. A, Causes of allograft loss. B, Causes of death from posttransplantation years 1 through 5. Adapted with permission from Cecka.74
Magee CC, Pascual M. Update in Renal Transplantation. Arch Intern Med. 2004;164(13):1373-1388. doi:10.1001/archinte.164.13.1373
Renal transplantation is the treatment of choice for most patients with end-stage renal disease. The shortage of donor organs, however, remains a major obstacle to successful, early transplantation. This shortage has actually worsened despite an increase in living related and unrelated donors. On the other hand, over the last 10 years, allograft and recipient survival have significantly improved. This encouraging outcome reflects many factors, particularly a favorable shift in the balance between the efficacy and toxicity of immunosuppressive regimens. As acute rejection and early graft loss have become less common, the focus is increasingly directed toward the prevention and treatment of the long-term complications of renal transplantation. These include suboptimal allograft function, premature death, cardiovascular disease, and bone disease. Thus, a multidisciplinary approach—rather than management of immunological issues alone—is now required to optimize long-term outcomes of renal transplant recipients.
Successful renal transplantation allows freedom from the lifestyle restrictions and complications associated with dialysis and is therefore associated with better quality of life.1 One study of United States Renal Data System (USRDS) data compared outcomes in patients on the transplant waiting list (ie, who were continuing to receive dialysis) with those of patients who had received a kidney transplant. It found that, after 3 to 4 years of follow-up, transplantation reduced the overall risk of death by 68%.2 Transplantation conferred a survival benefit in almost all subgroups, including elderly or obese patients and those with hepatitis C. In addition, over the long term, it is more cost-efficient than dialysis.3 Thus, transplantation remains the optimal therapy for patients with end-stage renal disease (ESRD).
Short-term outcomes such as allograft survival, patient survival, and rates of acute rejection in the first 12 months after transplantation are excellent and continue to improve. For recipients of a first cadaveric kidney in the United States, current 1-year patient and graft survival probabilities are about 95% and 88%, respectively. For recipients of a first living donor kidney, current 1-year patient and graft survival probabilities are 98% and 94%, respectively4 (Figure 1). Registry data indicate that rates of acute rejection in the first 6 months have decreased to less than 20%5 (Figure 2). Similar improvements have been reported from other countries. These impressive results reflect incremental improvements in crossmatching tests (pretransplantation in vitro assays to detect donor antibodies to recipient HLA antigens), immunosuppressive regimens, antimicrobial prophylaxis, and overall surgical and medical care. For example, more effective anti-cytomegalovirus (CMV) prophylaxis with ganciclovir or valganciclovir has reduced—but not eliminated—the morbidity and mortality related to CMV disease.6 This reduction, in turn, has allowed the relatively safe use of more intensive immunosuppressive protocols, and thus lower rates of acute rejection.
Although static in the 1980s, long-term renal allograft survival has slowly but steadily increased in the last decade. For example, estimated cadaveric graft half-lives were 7.9 years for the 1988-1989 (2-year) cohort, 9.2 years for the 1994-1995 cohort, and 11.6 years for the 1998-1999 cohort.7 Of note, this improvement was concurrent with greater use of organs from older and less optimal deceased donors. The half-lives of living donor grafts have also improved, even though donor age has increased and the degree of matching for HLA antigens decreased.7 The latter change reflects the major increase in living unrelated donors such as spouses. Estimated living donor graft half-lives were 12.5 years for the 1988-1989 cohort, 15.8 years for the 1994-1995 cohort, and 19.3 years for the 1998-1999 cohort.7
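The graft half-life figures above can be interpreted with a simple projection. Half-life here is the time by which half of the grafts still functioning at 1 year are projected to fail; assuming roughly constant (exponential) attrition after the first year—a simplification commonly used for such registry estimates—the surviving fraction at t years beyond the first year is:

```latex
S(t) \;=\; 2^{-t/t_{1/2}} \;=\; e^{-t \ln 2 / t_{1/2}}
```

For example, with a half-life of 11.6 years (the 1998-1999 cadaveric cohort), about 2^(-5/11.6), or roughly 74%, of 1-year survivors would be projected to still function at 6 years after transplantation.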
Because the risk of acute rejection is greatest in the early posttransplantation period, more intensive immunosuppression is given at that time and progressively decreased in the following weeks and months. As long as the allograft is viable, some immunosuppression is required. The degree of maintenance immunosuppression required is now a matter of debate; standard immunosuppression is associated with many adverse effects including nephrotoxicity, increased risk of infection, and cancer. Furthermore, it does not allow the development of allograft tolerance.8 Single-center results with very-low-dose maintenance immunosuppression are encouraging9 but larger, randomized studies are required before such strategies can be widely recommended.
The main target of immunosuppressive drugs remains the CD4+ T cell because of its critical role in orchestrating the rejection response.10 Most immunosuppressive regimens incorporate glucocorticoids, a calcineurin inhibitor (CNI), and an antiproliferative agent (Table 1). The rationale for such "triple therapy" is that adequate immunosuppression can be achieved without a need for toxic doses of any one agent.
In the immediate posttransplantation period, induction therapy with lymphocyte-depleting antibodies or anti-interleukin 2 (anti–IL-2) receptor monoclonal antibodies is sometimes also used to provide additional immunosuppression. Induction therapy also allows delayed introduction of CNIs, the early use of which might exacerbate delayed graft function (usually defined as initial failure of the allograft to function, with a need for dialysis within the first week after transplantation). The long-term benefits of induction therapy are controversial, as registry data suggest little effect on overall graft survival.11 Since immediately after transplantation most activated T cells should be reactive only against donor antigens, and only activated T cells should express the full IL-2 receptor, IL-2 receptor blockers potentially provide specific and safe immunosuppression.
Glucocorticoids remain a cornerstone of immunosuppression in most patients. Dosage is progressively decreased in the first 3 to 6 months after transplantation, eg, to 5 to 10 mg/d of prednisone. The adverse effects of steroids are well known; of particular importance in transplant recipients are hyperlipidemia, hypertension, glucose intolerance, and osteoporosis. Unfortunately, complete withdrawal of steroids has traditionally been associated with rejection, and with both short- and long-term graft dysfunction in a subset of recipients.12 However, there is optimism that newer immunosuppressants such as mycophenolate mofetil (MMF) and tacrolimus will allow the safe use of much lower doses of steroids or their avoidance altogether. Certainly, nonsteroid or low-dose protocols should now be considered in patients at risk for significant toxicity (eg, those with pretransplantation osteoporosis), and steroid withdrawal should be considered in patients who have developed significant posttransplantation toxicity (eg, new onset of diabetes mellitus). Recipients treated without steroids must be followed up closely for possible rejection.
These drugs inhibit calcineurin, a pivotal enzyme in T-cell–receptor signaling, and thereby reduce the synthesis of several critical T-cell growth factors, including IL-2.10 Trough blood concentrations are used to guide dosing, but recent studies have shown that C2 concentrations (ie, blood concentrations 2 hours after ingestion) of cyclosporine correlate better with the risk of acute rejection.13,14 There are no published studies to date showing better long-term outcomes with C2 monitoring of cyclosporine.
Tacrolimus is more effective than cyclosporine in preventing acute rejection at doses currently used, and there is now some evidence that medium-term outcomes are better with tacrolimus.15 For these reasons and because its adverse effects profile is perceived to be better, tacrolimus is becoming the CNI of choice in kidney transplant recipients in many centers in the United States.16 Both drugs are metabolized by the intestinal and hepatic cytochrome P450 system. Inducers or inhibitors of this system should be prescribed with caution and more frequent monitoring of cyclosporine and tacrolimus concentrations should be performed. Although both drugs can cause acute nephrotoxicity, their role in causing significant chronic allograft dysfunction is unclear.17
Antiproliferative drugs include azathioprine, MMF, and sirolimus. These drugs function principally by inhibiting mitosis and thus proliferation of lymphocytes. The antiproliferative effects are not lymphocyte specific, however, and bone marrow suppression is the most common adverse effect. Azathioprine has been used in clinical transplantation for over 40 years but MMF is a more powerful immunosuppressant associated with better short-term—and probably better long-term—outcomes.18,19 Thus, in the last 5 years, MMF has replaced azathioprine for patients with new transplants in many centers. The combination of MMF and tacrolimus is very effective in preventing acute rejection but is associated with a high incidence of adverse gastrointestinal effects such as nausea, abdominal discomfort, and diarrhea. This probably reflects both intrinsic effects of tacrolimus and the higher MMF plasma concentrations obtained when the drug is prescribed with tacrolimus rather than cyclosporine.20
Sirolimus (rapamycin) is the most recently licensed immunosuppressive agent in renal transplantation. It functions by inhibiting "signal 3" in T-cell activation—by blocking the downstream effects of IL-2 and other growth factors on initiation of the cell cycle and, ultimately, inhibiting T-cell proliferation.21 Sirolimus is also metabolized via the cytochrome P450 system. In a randomized controlled trial, sirolimus was associated with a lower incidence of acute rejection but, when combined with cyclosporine and steroids, with more adverse effects than azathioprine.22 Two properties of sirolimus may benefit transplant recipients. First, its antiproliferative effects could prevent graft atherosclerosis (a beneficial effect analogous to that of sirolimus-coated stents in coronary artery disease); second, its antineoplastic effects could reduce the high incidence of posttransplantation tumors. Data on long-term outcomes with sirolimus are not yet available, however. Therefore, its role in the prevention of these important posttransplantation complications and in routine maintenance therapy remains to be defined.
Most drugs under investigation continue to target T cells because of their central role in allograft rejection, but there is growing interest in anti–B-cell and anti–plasma-cell therapies. Experimental agents include those modulating cell surface molecule interactions, T-cell receptor signaling (such as inhibitors of Janus kinase [Jak]), or lymphocyte trafficking.23 The search for safe and effective tolerance-inducing regimens in human transplantation continues. Tolerance is best defined as the absence of destructive antigraft immune responses in the presence of an otherwise competent immune system.24 Tolerance to self-antigens is the norm in humans and manipulation of its physiological mechanisms is a focus of ongoing research.24
The availability in clinical practice of at least 6 distinct maintenance drugs and several lymphocyte-depleting and IL-2–receptor antibody preparations now allows physicians much more flexibility in prescribing for the individual patient: a one-size-fits-all approach is no longer necessary or desirable. In this era of low acute rejection rates, more attention can be paid to minimizing adverse effects of immunosuppressants.25 For example, recipients with hyperlipidemia and hypertension can be prescribed tacrolimus, MMF, and minimal doses of steroids; conversely, cyclosporine might be preferable to tacrolimus in recipients at high risk for posttransplantation diabetes (see below). There is also a revival of interest in regimens that avoid or minimize exposure to steroids or the CNIs.25,26
Has the improvement in short-term outcomes been won at a heavy price? Are we seeing more complications related to excessive immunosuppression? On the one hand, recipient survival is clearly improving. Lower rates of acute rejection mean that fewer recipients are receiving extra courses of high-dose steroids and lymphocyte-depleting antibodies to treat acute rejection. Rates of CMV disease have fallen in the 1990s, reflecting better antiviral prophylaxis.6 On the other hand, the rise in incidence of reported cases of polyoma virus–induced interstitial nephritis very likely reflects the consequences of more intense immunosuppression.27 There is also evidence that rates of posttransplantation lymphoproliferative disease (PTLD) have increased slightly in the 1990s.28 Only in the next 10 to 20 years will we learn whether the more intensive regimens of the 1990s lead to even greater rates of neoplasia late after transplantation.
Antibody responses against donor organs have traditionally proved very difficult to control. Recently, important advances have been made in reversing humoral immune responses after transplantation (ie, treating humoral rejection) and before transplantation (ie, eliminating donor-recipient HLA incompatibilities, thus allowing more transplantation procedures).
Acute humoral rejection is less common than cellular rejection but is traditionally associated with a poorer prognosis. Staining of the renal peritubular capillaries for the complement split product C4d has been shown to be an accurate histologic marker of this form of rejection, facilitating its early identification.29 We and others have also demonstrated that a regimen of plasmapheresis (to remove antidonor antibodies) and of enhanced immunosuppression with tacrolimus, MMF, and intravenous IgG (to further suppress production of antidonor antibodies) can completely reverse humoral rejection, allowing restoration of excellent graft function.30,31 The role of sirolimus in controlling humoral rejection is being evaluated.
A significant minority of patients are "highly sensitized" in that they exhibit immunological reactivity to a broad panel of non–self-human antigens, particularly those of the HLA system. This reactivity reflects prior exposure to the antigens via previous transplantation, blood products, or pregnancy. All physicians involved in the care of patients with renal disease should be aware that transfusion of red cells to these patients should be minimized; if required, leukocyte-depleted products should be used, as it is the leukocyte fraction that is richest in HLA antigens. Fortunately, erythropoietin can reduce transfusion requirements and related sensitization.32
Highly sensitized patients often cannot find a compatible living donor and wait many years for a cadaveric allograft. A recent exciting advance has been the successful desensitization of some of these patients, thus allowing living donor or cadaveric transplantation. Several centers have reported encouraging short-term results—albeit with small patient numbers—using regimens that include plasmapheresis, intravenous IgG, MMF, and tacrolimus.30,33 Intravenous IgG alone may suffice, and has the advantage of minimal toxicity.34 Longer-term results of these approaches are awaited but desensitization will likely be increasingly offered to highly sensitized patients otherwise precluded from transplantation. Excellent short-term outcomes are also being reported with transplantation of kidneys across the ABO blood group "barrier."
The incidence and prevalence of dialysis-dependent patients continue to increase in the United States and in most other countries.4 Unfortunately, even with a slight increase in numbers of available cadaveric donors and a large increase in living donors, transplantation has not kept pace with the
"epidemic" of ESRD. Thus, the rate of transplantation per 100 dialysis patient-years is actually decreasing in the United States (Figure 3). The inevitable outcome is longer waiting times. Indeed, the management of patients on the waiting list (ensuring that they remain fit for transplantation) is becoming a significant workload for larger transplantation centers.7 Several strategies to increase organ donation rates are discussed here.
The many advantages of living donation are summarized in Table 2. One major advantage is that preemptive transplantation (before the need for dialysis) is more feasible. Not only does preemptive transplantation avoid complications associated with dialysis itself but recent studies show it to be associated with less acute rejection and better allograft survival rates.35 This intriguing finding may reflect the avoidance of proinflammatory effects of advanced uremia or dialysis itself. For preemptive transplantation to succeed, early referral of patients with chronic kidney disease to nephrologists and transplantation centers is essential.
The advantages of living donation—and greater public and provider awareness of this method—have spurred increases of 68% and 1000%, respectively, in the numbers of living related and unrelated donors in the United States over the last decade.4 In fact, the number of living donors surpassed that of cadaveric donors in 2001 and 2002.36 Despite the poor matching for HLA antigens associated with unrelated donation, outcomes are excellent.37 This emphasizes the benefits of transplanting a healthy kidney, ie, minimum perioperative ischemia and reperfusion injury.
For reasons that include patient preference, surgeon preference, and marketing strategy, laparoscopic nephrectomy is becoming the donor nephrectomy method of choice in the larger transplantation centers in the United States.38 To the donor it has the benefits of less postoperative pain, quicker convalescence, and a better cosmetic result than with the traditional open nephrectomy. These benefits have probably contributed to the increase in donation rates.39 Disadvantages of the laparoscopic method include higher rates of early graft dysfunction, probably for the following reasons: higher intra-abdominal pressures during the procedure; longer warm ischemia duration; less experience with the technique, which entails a learning curve; and more manipulation of the renal vessels.40 One randomized trial found better donor and similar recipient outcomes with hand-assisted than with open live-donor nephrectomy.41
Balancing the professional goal of alleviating the recipient's illness with the precept, "first, do no harm," is crucial in the evaluation of potential living kidney donors. Four conditions must be satisfied before living donation can proceed: the risk to the donor must be low, the donor must be fully informed, the decision to donate must be independent and voluntary, and there must be a good chance of a successful recipient outcome.42 To avoid conflict of interest, the proposed donor should be meticulously evaluated by a physician not involved in the care of the proposed recipient.43 Potential kidney donors must be informed that donation can have adverse medical, psychological, and financial consequences. Most donors, however, obtain psychological benefits from the donation.43 With the careful selection of donors and an experienced surgical team, the incidence of serious perioperative complications is very low.43 Most follow-up studies of individuals who have undergone nephrectomy—for donation or other reasons—are reassuring.43,44 Minor asymptomatic increases in urinary protein excretion may occur and blood pressure may become slightly elevated. Nevertheless, concern has been expressed about the long-term medical consequences to the donor.45,46 Ideally, a national registry of donors would be established to allow more rigorous long-term follow-up.
As the name suggests, expanded-criteria donors are those who traditionally would not have been considered for donation. The term applies mainly, but not exclusively, to cadaveric donation and encompasses non–heart-beating (NHB) donors, elderly donors, and donors with diseases such as hypertension; examples are shown in Table 3. The main concern regarding many forms of expanded-criteria donation is the transplantation of an inadequately functioning nephron mass.47 Not surprisingly, short- and long-term outcomes with such grafts have been somewhat inferior to those obtained from "normal-criteria" donors. However, it should be emphasized that receiving an allograft from an expanded-criteria donor confers a significant survival advantage over remaining a dialysis patient on the transplant waiting list.48
Most deceased donors have sustained brainstem death but maintain renal perfusion because of cardiorespiratory support. It is estimated that the use of allografts from NHB donors—those who have sustained cardiorespiratory arrest—could increase the supply of cadaveric kidneys by up to 40%.49 The use of allografts from NHB donors has lagged behind this estimate, particularly in the United States, for several reasons.49 First, early posttransplantation complications, such as primary nonfunction of the allograft and delayed graft function, are more common with such allografts. This is to be expected, as prolonged hypotension is well known to cause temporary ischemic kidney damage. Case-control studies and registry data, however, indicate that this early dysfunction does not necessarily translate into poorer long-term graft or recipient survival.50,51 Second, in many centers, there are unresolved—but not unresolvable—legal and logistic issues related to the diagnosis of cardiac death in the context of organ retrieval; to obtaining family consent; and to starting organ preservation measures (such as in situ cooling) before consent is available.52 Very clear criteria for cardiac death must be developed by cardiologists and anesthesiologists (as neurologists have for brain death) and there must never be a perception that an anticipated NHB donation is influencing the care of critically ill patients. It is important to note that, although it requires much effort, NHB donation can significantly increase organ supply and yield excellent results.
Spain has achieved near-maximal rates of cadaveric organ donation through the placement of specialized transplant-coordinating staff in all major hospitals,53 and the Spanish model is currently being implemented in other countries. In the United States, using a standardized donor management protocol has been shown to increase organ procurement rates.54 Other proposals to increase rates of cadaveric donation include adoption of presumed consent to donate (in practice, not feasible in the United States or many Western countries), and use of financial incentives.55 The latter is highly controversial, and opponents argue that it represents the commodification of body parts and encourages exploitation of the poor.
If transplantation of vital organs from other mammals to humans were safe and effective, the organ crisis could be solved. Most research has concentrated on pig-into-primate combinations. Several major obstacles must be overcome before transplantation of whole porcine organs into humans becomes feasible. First, although hyperacute rejection involving complement has been mostly overcome by the development of transgenic pigs expressing human regulators of complement activity, very aggressive forms of humoral and cellular rejection still occur.56 Preventing these immunological complications may require very aggressive immunosuppression. Second, the transmission of zoonotic diseases—particularly retroviruses—from animals to humans remains a concern.57,58 Third, even if immunological incompatibility is overcome, physiological incompatibility may be problematic. Would a porcine kidney respond correctly to human antidiuretic hormone? Would its vitamin D derivatives and erythropoietin be physiologically active? Finally, research in this area is extremely expensive and industry support has waned somewhat.59 Thus, whole-organ xenotransplantation is unlikely to become a clinical reality soon. Alternatives under study include transplantation of xenogeneic cells or pancreatic islets, or immunoisolation of xenogeneic tissue in capsules or membranes.
The elderly are forming an increasing percentage of the ESRD population.4 Many elderly patients with ESRD have significant comorbidity, particularly cardiovascular disease and type 2 diabetes. Nevertheless, age per se is not a contraindication to transplantation: among elderly patients carefully screened and deemed fit for the procedure, long-term outcomes are clearly better with transplantation than dialysis.2,60,61 The percentage of recipients with a functioning transplant who are older than 65 years increased from 5.2% in 1992 to 10.6% in 2000,4 and elderly patients experience the same benefits from living donor transplantation as younger patients. Interestingly, although matching older cadaveric kidneys to older recipients is a common practice—presumably in an attempt to allocate the best kidneys to younger recipients—it may not improve overall graft survival.62 Elderly recipients should receive less intensive immunosuppression to minimize their relatively high risk of infection, as acute rejection is less common in this subgroup.
In the United States, incident rates of ESRD in blacks continue to be almost 4 times greater than in whites.4 Even after adjusting for clinical variables, blacks are less likely than whites to be referred for renal transplantation evaluation and to be placed on a transplant waiting list.63,64 When they are, rates of living and cadaveric donor transplantation are also lower for blacks, who thus must wait longer for a transplant. The latter discrepancy is illustrated in Figure 3. Potential living donors who are black are more likely to have contraindications to donation such as hypertension or type 2 diabetes mellitus.65 Although 12% of cadaveric donors are black (a percentage similar to the proportion of blacks in the US population), most cadaveric donors are white.66 Because distributions of HLA antigens and ABO blood groups differ between blacks and whites, algorithms of organ distribution based on these important characteristics tend to disadvantage blacks. Furthermore, blacks on the waiting list tend to be more highly sensitized to HLA antigens than their white counterparts. Adjusting for these variables reduces but does not eliminate discrepancies in wait-listing and in waiting times, however.67 These differences probably also reflect economic, psychosocial, and unmeasured clinical factors.67,68 Similar patterns regarding access to effective therapies such as cardiovascular surgery or treatment of lung cancer have been well documented.68,69 Even after renal transplantation, outcomes are poorer in blacks: compared with other ethnic groups, they have higher rates of acute rejection and lower rates of allograft survival.4,65
What can be done to remedy the discrepancies in access to renal transplantation? First, formal educational programs oriented toward minorities can significantly increase their living donation rates.70 Second, a reduction in emphasis on HLA matching in the organ allocation algorithm would direct more cadaveric kidneys to blacks (this change is now being implemented).71 Of course, such a change is only worthwhile if the slightly poorer HLA matching has minimal impact on allograft survival for all recipients.
What can be done to improve posttransplant outcomes in blacks? Standard measures, such as a greater use of living donors, will help. Post hoc analyses of several trials suggest that rejection rates in black recipients can be lowered with the use of tacrolimus or higher doses of MMF.72,73 Prolongation of Medicare or other benefits for immunosuppressive drugs will particularly benefit blacks, as they are at greater risk for late rejection and graft loss when immunosuppressive drugs are stopped because of the inability to pay.65 Single-center and registry data from the 1990s show a reduction in the racial disparities in renal allograft survival, probably due in large part to the use of the newer immunosuppressive drugs.70,74 More work needs to be done in this area, however.
Until recently, human immunodeficiency virus (HIV) infection was considered an absolute contraindication to renal transplantation. This reflected fears that immunosuppression would facilitate progression of infection, and that valuable allografts would be wasted because of the relatively short survival time of HIV-positive patients who undergo transplantation. With dramatic improvements in the survival time of HIV-positive patients, these premises are being reexamined.75,76 Those HIV-positive patients interested in transplantation should be referred to specialized centers as their management is extremely complex. One difficulty is the potential for significant interactions between the multiple antiviral and immunosuppressive medicines, some of which inhibit and some of which induce the cytochrome P450 system.75
Long-term graft survival is improving but remains inadequate. The principal causes of renal allograft loss beyond the first posttransplantation year are shown in Figure 4 (patient death is the principal cause of loss of a functioning graft).74 The main cause of death remains cardiovascular disease, followed by infection and malignancy. Because early outcomes have improved so much, the focus in renal transplantation is increasingly on strategies to improve long-term outcomes by preventing and treating cardiovascular disease, infection, bone disease, neoplasia, and chronic allograft nephropathy.17 It should be noted that many of the risk factors for cardiovascular and other diseases arise before transplantation and are associated with the uremic state. Thus, optimal management of the patient begins years before transplantation, in the chronic kidney disease stage. Better control of hypertension, anemia, hyperlipidemia, and hyperparathyroidism before transplantation will very likely lead to better posttransplant outcomes.
American and European registry data show that the leading cause of death is cardiovascular disease (in 30%-40% of cases).77,78 Death rates from cardiovascular disease, although lower than in dialysis patients, still greatly exceed those of the general population. The cumulative incidences of coronary heart disease, cerebrovascular disease, and peripheral vascular disease 15 years after transplantation have been estimated at 23%, 15%, and 15%, respectively.79 Addressing risk factors for these conditions must now be an important component of routine posttransplant management (Table 4). Cessation of cigarette smoking is essential, not only to reduce the risk of cardiovascular disease but also because continued smoking after transplantation is associated with poorer renal allograft survival, even after censoring for death.80 Pretransplant screening for coronary heart disease is indicated in most patients.
A high prevalence of cardiomyopathy (presenting clinically as congestive heart failure or as left ventricular enlargement on echocardiography) has been noted in renal transplant recipients.81,82 One retrospective analysis found that the development of congestive heart failure after transplantation was as common as the development of coronary heart disease; furthermore, it was associated with the same risk of death.81 The authors thus proposed the interesting concept that transplant recipients are in a state of "accelerated heart failure." The effects of treating anemia and hypertension (which are very prevalent after transplant) on rates of development of cardiomyopathy require study.
Hypertension. The prevalence of hypertension after transplantation is at least 60% to 80%.79 Causes include steroid use, CNIs, weight gain, allograft dysfunction, native kidney disease, and, less commonly, transplant renal artery stenosis. The complications of posttransplant hypertension are presumed to be a heightened risk of cardiovascular disease and of allograft failure,83 although distinguishing cause and effect is difficult. Hypertension should thus be aggressively managed in all recipients. In general, the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC VII) guidelines for the management of hypertension in the setting of chronic kidney disease, including a target blood pressure of less than 130/80 mm Hg, are appropriate.84 Nonpharmacological measures such as weight loss, moderation of sodium intake, moderation of alcohol intake, and increased exercise have traditionally not been emphasized in transplant clinics. The dosage of steroids and CNIs should be minimized, where possible. More than 1 antihypertensive drug will still be required in many cases. Diuretics, angiotensin-converting enzyme inhibitors, and angiotensin receptor blockers should be used with caution in the first 3 months after transplantation as they may elevate plasma creatinine levels and thus complicate management.
Although thiazide diuretics have the advantages of being well proven to reduce the cardiovascular complications of hypertension,85 of being inexpensive, and of enhancing the antihypertensive effects of other drugs,86 they are probably underused, as has been documented in the general hypertensive population.87 While studies have shown that angiotensin-converting enzyme inhibitors and angiotensin receptor blockers are safe and effective in treating posttransplant hypertension and in reducing proteinuria in the short term, no long-term randomized studies have been published to date confirming specific renoprotective effects of these drugs in renal transplant recipients. Nevertheless, it seems reasonable to apply the same indications for their use as in the general hypertensive population.
Dyslipidemia. The prevalence of hypercholesterolemia and hypertriglyceridemia after transplantation has been estimated at 60% and 35%, respectively,79 mostly because of steroid, CNI (cyclosporine more than tacrolimus), and sirolimus use. Because cardiovascular disease is so prevalent in these patients, it is reasonable to consider renal transplant recipient status a "coronary heart disease risk equivalent" when applying national cholesterol guidelines. This implies targeting plasma low-density lipoprotein cholesterol levels less than 100 mg/dL (2.59 mmol/L) via a combination of therapeutic lifestyle changes and drug therapy. Reduction in steroid dose and switching from cyclosporine to tacrolimus therapy will also aid in the treatment of dyslipidemia.
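The SI conversions quoted in this review can be checked with a short script. The function names below are illustrative; the conversion factors themselves are standard (cholesterol: divide mg/dL by 38.67; creatinine: multiply mg/dL by 88.4).

```python
# Standard conversion factors between conventional and SI laboratory units.
CHOLESTEROL_MG_DL_PER_MMOL_L = 38.67  # cholesterol: mg/dL per mmol/L
CREATININE_UMOL_L_PER_MG_DL = 88.4    # creatinine: umol/L per mg/dL


def cholesterol_mg_dl_to_mmol_l(mg_dl: float) -> float:
    """Convert a cholesterol concentration from mg/dL to mmol/L."""
    return mg_dl / CHOLESTEROL_MG_DL_PER_MMOL_L


def creatinine_mg_dl_to_umol_l(mg_dl: float) -> float:
    """Convert a creatinine concentration from mg/dL to umol/L."""
    return mg_dl * CREATININE_UMOL_L_PER_MG_DL


# The LDL target of 100 mg/dL corresponds to about 2.59 mmol/L,
# and a plasma creatinine of 1.4 mg/dL to about 123.8 umol/L.
print(round(cholesterol_mg_dl_to_mmol_l(100), 2))
print(round(creatinine_mg_dl_to_umol_l(1.4), 1))
```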
Statins are the cholesterol-lowering drug of choice in transplant recipients. A recently published trial of statin use in renal transplant recipients showed them to be safe and effective in lowering plasma low-density lipoprotein cholesterol concentrations.88 Cardiac deaths and nonfatal myocardial infarcts, although not overall mortality, were reduced. Because the metabolism of many statins is partly inhibited by the CNIs, blood concentrations of statins may be increased in transplant recipients, thereby increasing their risk for adverse effects such as rhabdomyolysis. This interaction is further enhanced if additional inhibitors of cytochrome P450, eg, diltiazem, are administered.89 Measures to minimize the risk of statin toxicity include the following: starting with low statin doses; using pravastatin or fluvastatin (which appear to have the least interaction with CNIs); avoiding other inhibitors of the cytochrome P450 system; avoiding fibrates; and periodically checking plasma creatine kinase levels and liver function.89 Rarely, nonstatin drugs are used to lower plasma lipids in transplant patients. Bile acid sequestrants, if used, should be taken separately from CNIs as they impair absorption of these drugs. Fibrates should be prescribed with extreme caution to patients taking statins and CNIs.
Hyperhomocystinemia. Plasma homocysteine concentrations, which are elevated in patients receiving dialysis, typically fall after transplantation but do not normalize. One prospective study found hyperhomocystinemia in 70% of renal transplant patients, and hyperhomocystinemia was an independent risk factor for cardiovascular events.90 Until clinical trial results are available, no firm recommendations can be made regarding B vitamin therapies for lowering hyperhomocystinemia in transplant recipients. The effects of immunosuppressive drugs on plasma homocysteine concentrations, if there are any, remain to be determined.
Anemia. Ideally, kidney transplantation should lead to increased production of erythropoietin and resolution of anemia. However, although underappreciated, posttransplant anemia is a common problem. A cross-sectional analysis of transplant recipients in our clinic found an anemia prevalence of 40%,91 and others have reported similar findings.92 This high prevalence mainly reflects suboptimal graft function and effects of medications that impair erythropoiesis (MMF, trimethoprim-sulfamethoxazole [SMX-TMP], and angiotensin-converting enzyme inhibitors). Observational studies have shown a strong association between posttransplant anemia and posttransplant development of congestive heart failure.82,93 Pending the results of studies of the treatment of posttransplantation anemia, it seems reasonable to manage anemia according to guidelines developed for patients with chronic (native) kidney disease, ie, focusing on iron repletion and erythropoietin use.94
Diabetes mellitus remains the leading cause of ESRD in the United States and worldwide.4 The current epidemic of type 2 diabetes will likely further increase the incidence and prevalence of ESRD due to this condition. Although the survival of diabetic transplant recipients is less than that of their nondiabetic counterparts, transplantation still confers a survival advantage on them compared with diabetic patients who remain on dialysis on the waiting list.2
Glucose control worsens after transplantation because of glucocorticoid and CNI use, increased food intake, weight gain, and restoration of kidney function.95 Furthermore, diabetes now develops after transplantation in up to 10% of adults, although this incidence has decreased with the lower cumulative doses of steroids and CNIs now used.96 Risk factors for posttransplantation diabetes include greater recipient age, nonwhite ethnicity, steroid treatment for rejection, and high doses of CNIs.96 High-dose tacrolimus is particularly diabetogenic, and more so in recipients with hepatitis C infection.97,98
Rates of preventive care testing (eg, of eyes, plasma lipids, and blood hemoglobin A1c) in transplant recipients with diabetes, although higher than in dialysis patients, lag behind guidelines and indeed behind the rates for general Medicare patients.4 Again, this emphasizes the need to think beyond immunological management alone and adopt a multidisciplinary approach to the care of the renal transplant recipient. The major benefits of intensified multifactorial intervention in type 2 diabetes are clear.99,100 Adoption of such interventions in diabetic transplant recipients will require much effort, but will likely yield equally impressive results.
A subset of diabetic patients with ESRD are candidates for kidney-pancreas transplantation. The organs can be transplanted simultaneously or as a staged procedure (pancreas after kidney [PAK]). Pancreatic allograft survival with the PAK procedure has been poorer than with simultaneous transplantation but this difference is narrowing.101 Furthermore, PAK affords the advantages of preemptive living donor kidney transplantation, better renal transplant outcomes, and fewer surgical complications.102 The percentage of pancreas transplants performed via the PAK procedure has been recently increasing.101 Improvements in surgical techniques and immunosuppression have resulted in steady increases in pancreas allograft survival.101 There is accumulating evidence that, in selected patients, overall and cardiovascular mortality is reduced with kidney-pancreas transplantation compared with kidney transplantation alone.103,104 A benefit of reversing or halting the microvascular complications of diabetes is likely but randomized controlled trials have not been conducted and would be difficult to perform.105,106 Shapiro et al107 have reported excellent preliminary outcomes in pancreatic islet transplantation in non-ESRD patients using a regimen of daclizumab, tacrolimus, and sirolimus without steroids. Studies in type 1 diabetic renal transplant recipients are now under way.
Data from tumor registries clearly demonstrate that the overall incidence of cancer in renal transplant recipients is greater than in patients receiving dialysis and in the general population.108,109 This increase in incidence applies to most cancers (at least in the Australian/New Zealand ANZDATA Registry), but the risks for certain transplant-associated cancers such as lymphomas and skin cancers are dramatically increased (Table 5).109 Interestingly, the incidence of common non–skin cancers (ie, breast, lung, colorectal, and prostate cancers) is only slightly increased, if at all.
The reported cancer incidence may be increased for several reasons. First, immunosuppression allows uncontrolled proliferation of oncogenic viruses and probably inhibits normal tumor surveillance mechanisms. There is experimental evidence that CNIs may have tumor-promoting effects mediated by their effects on transforming growth factor β production.110 Second, recipient factors related to the primary renal disease (eg, abuse of analgesics, hepatitis B infection, and hepatitis C infection) may also promote neoplasia. Finally, ascertainment bias may occur because of assiduous monitoring and reporting of transplantation patients.
The cumulative amount of immunosuppression, rather than a specific drug, is the most important factor increasing the cancer risk. Thus, the single most important measure to prevent cancers is to minimize excessive immunosuppression. Primary and secondary preventive strategies for breast, lung, bowel, and urogenital cancers (eg, mammography, smoking cessation, endoscopy, and pelvic examination in women) should be similar to those recommended for the general population but should be more rigorous for cancers of the skin.111 Thus, transplant recipients should be specifically counseled to minimize exposure to sun, wear protective clothing, and apply sunscreen to exposed areas. Premalignant skin lesions should be treated with cryotherapy or surgical excision.111,112
The long-term impact of the newer immunosuppression regimens on predisposition to cancer is unknown but it is certainly of some concern. A general rule is that when cancer occurs, immunosuppression should be greatly decreased. In some cases, rejection of the graft will result but the risks and benefits of continuing immunosuppression must be judged on a case-by-case basis. Unlike for lung and cardiac allografts, the loss of a renal allograft is not fatal, as dialysis is always an option. The potential antitumor effect of sirolimus has been discussed above.
The cumulative incidence of posttransplantation lymphoproliferative disease (PTLD) in renal transplant recipients is between 1% and 5%.79 More than 90% of PTLD cases are non-Hodgkin lymphomas, and most are of recipient B-cell origin.113 Most cases occur in the first 24 months after transplantation. Risk factors include (1) the combination of an Epstein-Barr virus (EBV)-positive donor and an EBV-negative recipient; (2) the combination of a CMV-positive donor and a CMV-negative recipient; (3) being a pediatric recipient (in part because children are more likely to be EBV-negative); and (4) receiving more intensive immunosuppression.114 Infection and transformation of B cells by EBV is important in the pathogenesis of many cases of PTLD: the proliferation of transformed B cells is initially polyclonal, but a malignant clone may evolve. The clinical and pathological spectrum and treatment of PTLD is quite variable and has been reviewed recently.115,116
Treatment involves reduction or cessation of immunosuppression and various combinations of antiviral therapy, radiotherapy, chemotherapy, and surgery. The prognosis of severe forms of PTLD has traditionally been poor but will likely improve. First, techniques for monitoring EBV load after transplantation are being developed. These may prove useful in identifying patients who are at high risk for developing PTLD or who have early disease, and ultimately facilitate pre-emptive therapy for such patients. Second, several relatively nontoxic immunotherapies have been developed and some are already in clinical use. These include biological immune modifiers such as interferon α and IL-6, adoptive immunotherapy with virus-specific T cells, and elimination of B cells using rituximab, an anti-CD20 monoclonal antibody.116 Although long-term data on the effects of rituximab in PTLD are awaited, the drug is being increasingly used because of its favorable therapeutic index.
The transplantation procedure and subsequent immunosuppression increase the risk of serious infection. As outlined by Fishman and Rubin,117 the patterns of infection can roughly be considered under 3 periods: 0 to 1 month, 1 to 6 months, and more than 6 months after transplantation.
Most infections in the first month after transplantation are similar to those that can be seen in any surgical ward: infections of wounds, lungs, and urinary tract and vascular catheters; they are usually treated accordingly.
Weeks of intense immunosuppression now increase the risk of opportunistic infections from microorganisms such as CMV, EBV, Listeria monocytogenes, Pneumocystis carinii, and Nocardia species. Typical preventive measures for infections from 1 to 6 months after transplantation include antiviral prophylaxis (for 3-6 months after transplantation) and SMX-TMP prophylaxis (for 6-12 months).
With reduction in immunosuppression, the risk of infection usually decreases in the long term (>6 months after transplantation) and becomes quite similar to that of the general population. Thus, a previously stable patient with a plasma creatinine concentration of 1.4 mg/dL (123.76 µmol/L) presenting 3 years after transplantation with community-acquired pneumonia is much more likely to have pneumococcal or mycoplasmal infection than pneumocystosis. Two groups, however, remain at significantly higher risk for opportunistic infection: those with poor graft function and those receiving late additional immunosuppression (typically in cases of rejection). These patients should continue to receive SMX-TMP.
The likelihood of prior exposure to CMV (as evidenced by anti-CMV IgG) increases with age, and more than two thirds of donors and recipients are latently infected prior to renal transplantation. Cytomegalovirus disease (confirmed by recent laboratory testing and evidence of tissue inflammation and/or dysfunction) can arise because of reactivation of latent recipient virus, reactivation of latent donor-derived virus, or primary infection with donor-derived virus.118 Not surprisingly, the risk of CMV disease is highest in CMV-positive donor/CMV-negative recipient pairings; lowest in CMV-negative donor/CMV-negative recipient pairings; and intermediate in CMV-positive donor/CMV-positive recipient pairings and CMV-negative donor/CMV-positive recipient pairings.
Cytomegalovirus disease usually arises 1 to 6 months after transplantation, although gastrointestinal and retinal involvement often occurs later. Typical clinical features are fever, malaise, and leukopenia; there may also be symptoms or laboratory evidence of specific organ involvement (Table 6). Urgent investigation and immediate empiric treatment is needed in severe cases. The virus can be detected in blood or tissue fluids by rapid shell-vial culture, antigen assays, or polymerase chain reaction (serology studies are of little benefit in the acute setting); the optimal test depends on local expertise. The virus can also be identified in involved tissue by immunohistochemistry techniques. Importantly, low or negative CMV concentrations in peripheral blood do not exclude organ involvement (especially of the gastrointestinal tract); therefore, bronchoscopy, endoscopy, or any other appropriate investigation should be aggressively pursued according to symptoms and signs. A tissue diagnosis is also required to exclude coinfection with other microbes, eg, P carinii.
Cytomegalovirus disease is treated with reduction in immunosuppression and specific antiviral agents, usually ganciclovir or valganciclovir. The latter has much better oral bioavailability and is increasingly used instead of intravenous ganciclovir. Foscarnet is nephrotoxic and should only be used for the rare cases resistant to ganciclovir. Although supportive data for the treatment are unavailable, it is reasonable to add CMV hyperimmune globulin in severe cases.
The prevention of CMV disease is of great clinical importance. One strategy is to provide prophylaxis to all patients at risk, ie, when the donor and/or recipient has positive serology findings for CMV. Another strategy is to provide prophylaxis only to those at greatest risk or those who show laboratory evidence of new virus replication. Both have advantages and disadvantages.119 Ganciclovir and valganciclovir are commonly used as preventive agents, typically for 3 to 4 months after transplantation. One randomized controlled trial found valacyclovir to be effective in preventing CMV disease120 but further studies are needed to confirm this report.
In the absence of prophylaxis, pneumocystosis occurs most commonly in the first year after transplantation (although not in the first month) but can occur later, especially if immunosuppression is increased. Typical symptoms of pneumonia due to P carinii are fever, shortness of breath, and nonproductive cough. Chest radiography characteristically shows bilateral interstitial-alveolar infiltrates. Diagnosis requires detection of the organism in a clinical specimen by colorimetric or immunofluorescent stains. Because the organism burden is usually lower than in HIV-infected patients, the sensitivity of induced sputum or bronchoalveolar lavage specimens is lower in renal transplant recipients; tissue biopsy should be quickly obtained if these tests are negative and the clinical suspicion remains high. The treatment of choice remains SMX-TMP.121 High-dose SMX-TMP may increase plasma creatinine concentration without affecting glomerular filtration rate, ie, "real" kidney function. Unlike in HIV-positive patients, there is no firm evidence to support the use of higher-dose steroids during the early treatment phase of pneumocystosis in renal transplant patients.
Fortunately, antimicrobial prophylaxis is very effective in preventing pneumonia due to P carinii. The preventive agent of choice is SMX-TMP: it is inexpensive and generally well tolerated, and also prevents urinary tract infections and opportunistic infections such as nocardiosis, toxoplasmosis, and listeriosis. Alternative drugs include dapsone with or without pyrimethamine, atovaquone, and aerosolized pentamidine.
This topic has been recently reviewed.122 Generally accepted guidelines are as follows: (1) immunizations should be completed at least 4 weeks before transplantation; (2) immunization should be avoided in the first 6 months after transplantation because of ongoing administration of high-dose immunosuppressive agents and a risk of provoking graft dysfunction; and (3) live vaccines should be avoided altogether after transplantation. Household members of transplant recipients should receive yearly immunization against influenza.
Minimizing infection risk after transplantation requires a meticulous surgical technique; monitoring or prophylaxis for viral infection in the first 3 to 6 months; SMX-TMP prophylaxis for the first 6 to 12 months; and, of course, avoidance of excessive immunosuppression. The last point is particularly relevant for elderly recipients. When infection is suspected, early aggressive diagnosis (eg, by bronchoscopy in patients with pneumonitis) and therapy are essential.117 If life-threatening infection occurs, immunosuppression must be stopped or greatly reduced (stress-dose steroids may still be required).
Bone disease in the dialysis patient is multifactorial and involves varying degrees of hyperparathyroidism, vitamin D deficiency, adynamic bone disease, aluminum intoxication, and amyloidosis. Successful renal transplantation offers the potential to reverse or at least prevent further progression of these conditions. Unfortunately, bone disease can be a major problem after renal transplantation because of the persistence of the above conditions, suboptimal kidney function, and the superimposed effects of steroids on bone.
Reduction in bone mineral density is now recognized as a very common complication of solid organ transplantation: one recent review estimated an incidence up to 60% in the first 18 months after renal transplantation.79 It is important to note that the pathophysiology and treatment of osteoporosis may differ from that seen in the general population (Table 7). The principal cause is steroid use—through direct inhibition of osteoblastogenesis, induction of apoptosis in bone cells, inhibition of sex hormone production (in both men and women), decreased gut calcium absorption, and increased urinary calcium excretion.123 Other factors that may play a role include persistent hyperparathyroidism after transplantation, postmenopausal state, vitamin D deficiency and/or resistance, and phosphate depletion. Whether low bone mineral density is a risk factor for fractures in renal transplant recipients has not been proven; indeed, limited evidence suggests that low bone mineral density, as identified by dual-energy x-ray absorptiometry (DEXA), may not predict future fracture.124,125 There is no doubt, however, that pathological fractures are common after renal transplantation. The estimated total fracture rate after renal transplantation is 2% per year in nondiabetic patients, 5% per year in type 1 diabetic patients, and up to 12% per year in pancreas-kidney recipients.126
Interventions to minimize posttransplantation bone loss are summarized in Table 7. Implementation of these measures immediately after transplantation is essential, as most of the bone loss occurs in the first 6 months when the doses of steroids are highest. The role of DEXA in the diagnosis and of bisphosphonates in the prevention of posttransplantation bone disease requires further prospective study. There is evidence that bisphosphonates effectively prevent posttransplantation bone loss,127 but trials reported to date have been underpowered to detect reductions in posttransplantation fracture rates. There is still some concern that these agents, by suppressing bone remodeling, could worsen the mechanical integrity of bone in conditions such as osteomalacia or adynamic bone.126 Furthermore, albeit at very high doses, bisphosphonates can be nephrotoxic.128 A reasonable approach is to obtain DEXA of 3 bone sites (lumbar spine, forearm, and hip) at the time of transplantation in patients with conventional risk factors for osteoporosis. In those considered to be at high risk for osteoporosis-related fracture based on clinical features and DEXA results, posttransplantation administration of bisphosphonates and the use of minimal-dose steroids or of nonsteroid protocols should be considered. Close collaboration with a bone endocrinologist in these situations is also advised. All patients should receive calcium and synthetic forms of vitamin D after kidney transplantation unless there are contraindications.
Incomplete resolution of hyperparathyroidism is very common after renal transplantation. This reflects multiple factors: inherent slow involution of parathyroid cells, suboptimal renal function, suboptimal production of 1,25-dihydroxyvitamin D3, and steroid-induced reduction in intestinal calcium absorption.125 Posttransplantation hyperparathyroidism can cause hypercalcemia and exacerbate bone loss. If hypercalcemia is severe and associated with complications such as graft dysfunction, early parathyroidectomy is indicated. Less severe cases can be given a trial of medical therapy. Better control of hyperparathyroidism before transplantation remains the key to preventing significant posttransplantation parathyroid disease.
Osteonecrosis or avascular necrosis of bone has been reported to occur in 3% to 16% of renal transplant recipients.129 Hip, knee, ankle, shoulder, or elbow joints can be involved. If severe, significant joint damage can occur. The principal cause is steroid use. Fortunately, the incidence has greatly declined because renal transplant recipients now receive lower cumulative doses of steroids (maintenance doses are lower, and fewer "pulses" are required because acute rejection is less common). The presenting symptom is joint pain. Magnetic resonance imaging, radionuclide bone scan, and plain films (in order of decreasing sensitivity) are used to confirm the diagnosis. Severe cases require surgery.
After censoring for death, chronic allograft nephropathy is the most important cause of long-term graft loss. This has been recently reviewed.17 The term chronic allograft nephropathy is preferable to the older designation of chronic rejection because it encompasses the role of both immunological and nonimmunological factors (Table 8). Typical clinical features of chronic allograft nephropathy are hypertension, low-grade proteinuria, and slowly rising plasma creatinine level more than 6 months after transplantation. There is often a history of acute rejection. Specific treatment options are limited and progression to ESRD is usually slow but inevitable. Recipients with failing allografts should be managed similarly to those with native chronic kidney disease with regard to treatment of anemia, hyperparathyroidism, hypertension, and other complications of renal failure. Chronic allograft nephropathy itself is not a contraindication to future transplantation; indeed, allograft loss is a common diagnosis in those joining the organ waiting list.
The last 10 years have seen continued advances in our ability to safely suppress the renal transplant recipient's antigraft immune responses. This has resulted in impressive increases in allograft survival. Although acute rejection and early graft loss have become relatively uncommon, late graft loss and premature death (mainly from cardiovascular disease) remain major challenges. The focus is therefore shifting toward the prevention and treatment not only of chronic allograft nephropathy and other medical complications of renal transplantation, but also of cardiovascular disease and the sequelae of the prior uremic state. In many cases, recipients must be managed similarly to those with native kidney disease—with rigorous control of hypertension, dyslipidemia, anemia, hyperparathyroidism, and diabetes. This will require significant effort from transplant physicians, primary care physicians, and other specialists. Despite impressive increases in the numbers of living donors and efforts to expand cadaveric donation, the organ shortage also remains an important problem.
Correspondence: Colm C. Magee, MD, Renal Division, MRB-4, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115 (firstname.lastname@example.org).
Accepted for publication July 18, 2003.
This work was supported in part by the Leenards Foundation, Lausanne, Switzerland (Dr Pascual).