SQL -- Remove duplicate pairs


Problem description


I'm using SQLite to store a set of undirected edges of a graph, using two columns, u and v. For example:

u  v
1  2
3  2
2  1
3  4


I have already been through it with SELECT DISTINCT * FROM edges and removed all duplicate rows.
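For reference, here is a minimal sketch that reproduces the sample data above so the queries below can be tried directly. The table name edges is taken from the query in the question; the INTEGER column types are an assumption:

-- Minimal setup sketch (column types assumed)
CREATE TABLE edges (u INTEGER, v INTEGER);

-- Sample data from the question, including the reversed duplicate (2, 1)
INSERT INTO edges (u, v) VALUES (1, 2), (3, 2), (2, 1), (3, 4);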


However, there are still duplicates if we remember these are undirected edges. In the above example, the edge (1,2) appears twice, once as (1,2) and once as (2,1) which are both equivalent.


I wish to remove all such duplicates leaving only one of them, either (1,2) or (2,1) -- it doesn't really matter which.


Any ideas how to achieve this? Thanks!

Recommended answer


If the same pair (reversed) exists, take the one where u > v.

-- Keep each undirected edge once: a row survives if u > v,
-- or if its reversed counterpart is not present in the table.
SELECT DISTINCT u, v
FROM edges t1
WHERE t1.u > t1.v
    OR NOT EXISTS (
        SELECT 1 FROM edges t2
        WHERE t2.u = t1.v AND t2.v = t1.u
    )
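The query above only selects one copy of each undirected edge. If the goal is to actually remove the redundant reversed rows from the table, a sketch along the same lines (again assuming the table is named edges, as in the question) could be:

-- Delete the u < v copy of a pair whenever its reversed counterpart
-- also exists; the u > v copy is never a delete candidate, so it is kept.
DELETE FROM edges
WHERE u < v
  AND EXISTS (
      SELECT 1 FROM edges e2
      WHERE e2.u = edges.v AND e2.v = edges.u
  );

Alternatively, SQLite's two-argument scalar min() and max() functions can normalize each edge on the fly, e.g. SELECT DISTINCT min(u, v) AS u, max(u, v) AS v FROM edges, at the cost of no longer preserving the original orientation of each row.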
