Well, I may be unclear on your reason for fixing this, but if you read all of the pages in the link I posted, it explains how search engines read and value links. "Duplicate content" just means two or more URLs that serve the same page, and search engines flag it when they find it. You can fix it in three different ways. One is to create a genuinely different page for each link, which of course is not practical for most sites. Next, you can hide the duplicate links from the crawler so they are never followed. Or you can just tell the search engine that the pages are identical so the duplicates are not indexed separately. This last option may cost you a little ranking power, since Google treats it much like a redirect, which they tend to discount.
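For the second option, one standard way to hide links from the crawler (rather than "tricky programming code") is the rel="nofollow" attribute, the same rel mechanism the canonical tag below uses; a robots meta tag can keep the duplicate page itself out of the index. A minimal sketch, with placeholder URLs that are my own, not from the original post:

```html
<!-- On the duplicate page itself, ask engines not to index it: -->
<meta name="robots" content="noindex, follow">

<!-- Or, on pages linking to the duplicate, mark the link so the
     crawler passes it no credit (placeholder URL): -->
<a href="http://www.example.com/duplicate-version-of-page/" rel="nofollow">Printer-friendly version</a>
```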
Anyway, here is the paragraph from that post that explains how to add the rel attribute to the links to mark them as duplicates so they are not flagged. Hope this helps you... Good luck...
Another option for dealing with duplicate content is to utilize the rel=canonical tag. The rel=canonical tag passes the same amount of link juice (ranking power) as a 301 redirect, and often takes much less development time to implement.
The tag is part of the HTML head of a web page. This meta tag isn't new, but like nofollow, simply uses a new rel parameter. For example:
<link href="http://www.example.com/canonical-version-of-page/" rel="canonical" />
This tag tells Bing and Google that the given page should be treated as though it were a copy of the URL www.example.com/canonical-version-of-page/ and that all of the links and content metrics the engines apply should actually be credited toward the provided URL.
NOTES: Since this goes inside the <head> section of the page, it does not show up as a link on the page itself, though it is visible if someone views the page source. Also, this can cause small issues if the page contains links that you do want included in the search engine's spider list, because the engines credit the canonical URL instead. So you have to think through how the page will be crawled.
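To make the placement concrete, here is a minimal sketch of a full page with the canonical tag in the <head>; the canonical URL is the example one from the quoted post, and the rest of the page is filler of my own:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Duplicate version of the page</title>
  <!-- The canonical tag lives here in the head, so it never renders on the page -->
  <link href="http://www.example.com/canonical-version-of-page/" rel="canonical" />
</head>
<body>
  <!-- Normal page content; only a view-source will reveal the tag above -->
  <p>Page content...</p>
</body>
</html>
```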